Nov 8 00:23:35.138808 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Nov 7 22:45:04 -00 2025
Nov 8 00:23:35.138850 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:23:35.138865 kernel: BIOS-provided physical RAM map:
Nov 8 00:23:35.138877 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Nov 8 00:23:35.138889 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Nov 8 00:23:35.138901 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Nov 8 00:23:35.138915 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20
Nov 8 00:23:35.138933 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved
Nov 8 00:23:35.138947 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Nov 8 00:23:35.138961 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Nov 8 00:23:35.138975 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Nov 8 00:23:35.138989 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Nov 8 00:23:35.139003 kernel: printk: bootconsole [earlyser0] enabled
Nov 8 00:23:35.139015 kernel: NX (Execute Disable) protection: active
Nov 8 00:23:35.139035 kernel: APIC: Static calls initialized
Nov 8 00:23:35.139050 kernel: efi: EFI v2.7 by Microsoft
Nov 8 00:23:35.139066 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c1a98
Nov 8 00:23:35.139081 kernel: SMBIOS 3.1.0 present.
Nov 8 00:23:35.139095 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Nov 8 00:23:35.139110 kernel: Hypervisor detected: Microsoft Hyper-V
Nov 8 00:23:35.139126 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Nov 8 00:23:35.139138 kernel: Hyper-V: Host Build 10.0.20348.1827-1-0
Nov 8 00:23:35.139152 kernel: Hyper-V: Nested features: 0x1e0101
Nov 8 00:23:35.139166 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Nov 8 00:23:35.139180 kernel: Hyper-V: Using hypercall for remote TLB flush
Nov 8 00:23:35.139755 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Nov 8 00:23:35.139772 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Nov 8 00:23:35.139786 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Nov 8 00:23:35.139800 kernel: tsc: Detected 2593.904 MHz processor
Nov 8 00:23:35.139813 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 8 00:23:35.139826 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 8 00:23:35.139839 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Nov 8 00:23:35.139852 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Nov 8 00:23:35.139870 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 8 00:23:35.139883 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Nov 8 00:23:35.139895 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Nov 8 00:23:35.139908 kernel: Using GB pages for direct mapping
Nov 8 00:23:35.139921 kernel: Secure boot disabled
Nov 8 00:23:35.139934 kernel: ACPI: Early table checksum verification disabled
Nov 8 00:23:35.139947 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Nov 8 00:23:35.139966 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 8 00:23:35.139982 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 8 00:23:35.139996 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Nov 8 00:23:35.140010 kernel: ACPI: FACS 0x000000003FFFE000 000040
Nov 8 00:23:35.140024 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 8 00:23:35.140038 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 8 00:23:35.140052 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 8 00:23:35.140069 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 8 00:23:35.140083 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 8 00:23:35.140096 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 8 00:23:35.140110 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 8 00:23:35.140124 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Nov 8 00:23:35.140138 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Nov 8 00:23:35.140151 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Nov 8 00:23:35.140165 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Nov 8 00:23:35.140182 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Nov 8 00:23:35.140197 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Nov 8 00:23:35.140211 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Nov 8 00:23:35.140224 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Nov 8 00:23:35.140249 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Nov 8 00:23:35.140263 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Nov 8 00:23:35.140277 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Nov 8 00:23:35.140291 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Nov 8 00:23:35.140304 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Nov 8 00:23:35.140321 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Nov 8 00:23:35.140335 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Nov 8 00:23:35.140349 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Nov 8 00:23:35.140362 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Nov 8 00:23:35.140376 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Nov 8 00:23:35.140390 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Nov 8 00:23:35.140404 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Nov 8 00:23:35.140418 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Nov 8 00:23:35.140432 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Nov 8 00:23:35.140448 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Nov 8 00:23:35.140462 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Nov 8 00:23:35.140476 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Nov 8 00:23:35.140489 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Nov 8 00:23:35.140503 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Nov 8 00:23:35.140517 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Nov 8 00:23:35.140530 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Nov 8 00:23:35.140544 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Nov 8 00:23:35.140558 kernel: Zone ranges:
Nov 8 00:23:35.140575 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 8 00:23:35.140588 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Nov 8 00:23:35.140602 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Nov 8 00:23:35.140615 kernel: Movable zone start for each node
Nov 8 00:23:35.140629 kernel: Early memory node ranges
Nov 8 00:23:35.140643 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Nov 8 00:23:35.140656 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Nov 8 00:23:35.140670 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Nov 8 00:23:35.140684 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Nov 8 00:23:35.140700 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Nov 8 00:23:35.140714 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 8 00:23:35.140728 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Nov 8 00:23:35.140742 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Nov 8 00:23:35.140755 kernel: ACPI: PM-Timer IO Port: 0x408
Nov 8 00:23:35.140769 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Nov 8 00:23:35.140783 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Nov 8 00:23:35.140796 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 8 00:23:35.140810 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 8 00:23:35.140826 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Nov 8 00:23:35.140840 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Nov 8 00:23:35.140854 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Nov 8 00:23:35.140868 kernel: Booting paravirtualized kernel on Hyper-V
Nov 8 00:23:35.140882 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 8 00:23:35.140896 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 8 00:23:35.140910 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u1048576
Nov 8 00:23:35.140923 kernel: pcpu-alloc: s196712 r8192 d32664 u1048576 alloc=1*2097152
Nov 8 00:23:35.140937 kernel: pcpu-alloc: [0] 0 1
Nov 8 00:23:35.140953 kernel: Hyper-V: PV spinlocks enabled
Nov 8 00:23:35.140967 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 8 00:23:35.140982 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:23:35.140997 kernel: random: crng init done
Nov 8 00:23:35.141010 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 8 00:23:35.141024 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 8 00:23:35.141037 kernel: Fallback order for Node 0: 0
Nov 8 00:23:35.141051 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Nov 8 00:23:35.141067 kernel: Policy zone: Normal
Nov 8 00:23:35.141091 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 8 00:23:35.141106 kernel: software IO TLB: area num 2.
Nov 8 00:23:35.141124 kernel: Memory: 8069608K/8387460K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42880K init, 2320K bss, 317592K reserved, 0K cma-reserved)
Nov 8 00:23:35.141139 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 8 00:23:35.141154 kernel: ftrace: allocating 37980 entries in 149 pages
Nov 8 00:23:35.141168 kernel: ftrace: allocated 149 pages with 4 groups
Nov 8 00:23:35.141183 kernel: Dynamic Preempt: voluntary
Nov 8 00:23:35.141198 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 8 00:23:35.141214 kernel: rcu: RCU event tracing is enabled.
Nov 8 00:23:35.141287 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 8 00:23:35.141327 kernel: Trampoline variant of Tasks RCU enabled.
Nov 8 00:23:35.141340 kernel: Rude variant of Tasks RCU enabled.
Nov 8 00:23:35.141352 kernel: Tracing variant of Tasks RCU enabled.
Nov 8 00:23:35.141366 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 8 00:23:35.141381 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 8 00:23:35.141398 kernel: Using NULL legacy PIC
Nov 8 00:23:35.141412 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Nov 8 00:23:35.141423 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 8 00:23:35.141436 kernel: Console: colour dummy device 80x25
Nov 8 00:23:35.141451 kernel: printk: console [tty1] enabled
Nov 8 00:23:35.141465 kernel: printk: console [ttyS0] enabled
Nov 8 00:23:35.141479 kernel: printk: bootconsole [earlyser0] disabled
Nov 8 00:23:35.141491 kernel: ACPI: Core revision 20230628
Nov 8 00:23:35.141505 kernel: Failed to register legacy timer interrupt
Nov 8 00:23:35.141519 kernel: APIC: Switch to symmetric I/O mode setup
Nov 8 00:23:35.141534 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Nov 8 00:23:35.141555 kernel: Hyper-V: Using IPI hypercalls
Nov 8 00:23:35.141569 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Nov 8 00:23:35.141581 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Nov 8 00:23:35.141594 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Nov 8 00:23:35.141608 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Nov 8 00:23:35.141622 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Nov 8 00:23:35.141634 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Nov 8 00:23:35.141648 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.80 BogoMIPS (lpj=2593904)
Nov 8 00:23:35.141666 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Nov 8 00:23:35.141680 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Nov 8 00:23:35.141694 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 8 00:23:35.141707 kernel: Spectre V2 : Mitigation: Retpolines
Nov 8 00:23:35.141722 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 8 00:23:35.141737 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Nov 8 00:23:35.141752 kernel: RETBleed: Vulnerable
Nov 8 00:23:35.141767 kernel: Speculative Store Bypass: Vulnerable
Nov 8 00:23:35.141781 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 8 00:23:35.141796 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 8 00:23:35.141814 kernel: active return thunk: its_return_thunk
Nov 8 00:23:35.141829 kernel: ITS: Mitigation: Aligned branch/return thunks
Nov 8 00:23:35.141843 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 8 00:23:35.141858 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 8 00:23:35.141873 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 8 00:23:35.141887 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Nov 8 00:23:35.141902 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Nov 8 00:23:35.141917 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Nov 8 00:23:35.141932 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 8 00:23:35.141947 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Nov 8 00:23:35.141962 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Nov 8 00:23:35.141980 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Nov 8 00:23:35.141995 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Nov 8 00:23:35.142010 kernel: Freeing SMP alternatives memory: 32K
Nov 8 00:23:35.142025 kernel: pid_max: default: 32768 minimum: 301
Nov 8 00:23:35.142040 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 8 00:23:35.142054 kernel: landlock: Up and running.
Nov 8 00:23:35.142067 kernel: SELinux: Initializing.
Nov 8 00:23:35.142081 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 8 00:23:35.142095 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 8 00:23:35.142111 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Nov 8 00:23:35.142125 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 8 00:23:35.142144 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 8 00:23:35.142159 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 8 00:23:35.142174 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Nov 8 00:23:35.142194 kernel: signal: max sigframe size: 3632
Nov 8 00:23:35.142208 kernel: rcu: Hierarchical SRCU implementation.
Nov 8 00:23:35.142224 kernel: rcu: Max phase no-delay instances is 400.
Nov 8 00:23:35.142252 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 8 00:23:35.142266 kernel: smp: Bringing up secondary CPUs ...
Nov 8 00:23:35.142279 kernel: smpboot: x86: Booting SMP configuration:
Nov 8 00:23:35.142297 kernel: .... node #0, CPUs: #1
Nov 8 00:23:35.142311 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Nov 8 00:23:35.142326 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Nov 8 00:23:35.142341 kernel: smp: Brought up 1 node, 2 CPUs
Nov 8 00:23:35.142355 kernel: smpboot: Max logical packages: 1
Nov 8 00:23:35.142369 kernel: smpboot: Total of 2 processors activated (10375.61 BogoMIPS)
Nov 8 00:23:35.142384 kernel: devtmpfs: initialized
Nov 8 00:23:35.142398 kernel: x86/mm: Memory block size: 128MB
Nov 8 00:23:35.142415 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Nov 8 00:23:35.142429 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 8 00:23:35.142443 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 8 00:23:35.142457 kernel: pinctrl core: initialized pinctrl subsystem
Nov 8 00:23:35.142471 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 8 00:23:35.142485 kernel: audit: initializing netlink subsys (disabled)
Nov 8 00:23:35.142500 kernel: audit: type=2000 audit(1762561413.031:1): state=initialized audit_enabled=0 res=1
Nov 8 00:23:35.142514 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 8 00:23:35.142528 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 8 00:23:35.142545 kernel: cpuidle: using governor menu
Nov 8 00:23:35.142559 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 8 00:23:35.142573 kernel: dca service started, version 1.12.1
Nov 8 00:23:35.142587 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Nov 8 00:23:35.142601 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 8 00:23:35.142616 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 8 00:23:35.142629 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 8 00:23:35.142643 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 8 00:23:35.142658 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 8 00:23:35.142674 kernel: ACPI: Added _OSI(Module Device)
Nov 8 00:23:35.142688 kernel: ACPI: Added _OSI(Processor Device)
Nov 8 00:23:35.142703 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 8 00:23:35.142716 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 8 00:23:35.142731 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 8 00:23:35.142745 kernel: ACPI: Interpreter enabled
Nov 8 00:23:35.142759 kernel: ACPI: PM: (supports S0 S5)
Nov 8 00:23:35.142774 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 8 00:23:35.142788 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 8 00:23:35.142804 kernel: PCI: Ignoring E820 reservations for host bridge windows
Nov 8 00:23:35.142819 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Nov 8 00:23:35.142833 kernel: iommu: Default domain type: Translated
Nov 8 00:23:35.142847 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 8 00:23:35.142861 kernel: efivars: Registered efivars operations
Nov 8 00:23:35.142875 kernel: PCI: Using ACPI for IRQ routing
Nov 8 00:23:35.142889 kernel: PCI: System does not support PCI
Nov 8 00:23:35.142903 kernel: vgaarb: loaded
Nov 8 00:23:35.142916 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Nov 8 00:23:35.142933 kernel: VFS: Disk quotas dquot_6.6.0
Nov 8 00:23:35.142947 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 8 00:23:35.142960 kernel: pnp: PnP ACPI init
Nov 8 00:23:35.142974 kernel: pnp: PnP ACPI: found 3 devices
Nov 8 00:23:35.142988 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 8 00:23:35.143002 kernel: NET: Registered PF_INET protocol family
Nov 8 00:23:35.143014 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 8 00:23:35.143028 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 8 00:23:35.143043 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 8 00:23:35.143060 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 8 00:23:35.143073 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Nov 8 00:23:35.143104 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 8 00:23:35.143119 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 8 00:23:35.143132 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 8 00:23:35.143144 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 8 00:23:35.143158 kernel: NET: Registered PF_XDP protocol family
Nov 8 00:23:35.143173 kernel: PCI: CLS 0 bytes, default 64
Nov 8 00:23:35.143186 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 8 00:23:35.143209 kernel: software IO TLB: mapped [mem 0x000000003ae75000-0x000000003ee75000] (64MB)
Nov 8 00:23:35.143223 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Nov 8 00:23:35.143268 kernel: Initialise system trusted keyrings
Nov 8 00:23:35.143283 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Nov 8 00:23:35.143298 kernel: Key type asymmetric registered
Nov 8 00:23:35.143313 kernel: Asymmetric key parser 'x509' registered
Nov 8 00:23:35.143328 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 8 00:23:35.143343 kernel: io scheduler mq-deadline registered
Nov 8 00:23:35.143358 kernel: io scheduler kyber registered
Nov 8 00:23:35.143377 kernel: io scheduler bfq registered
Nov 8 00:23:35.143392 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 8 00:23:35.143407 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 8 00:23:35.143422 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 8 00:23:35.143437 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Nov 8 00:23:35.143451 kernel: i8042: PNP: No PS/2 controller found.
Nov 8 00:23:35.143639 kernel: rtc_cmos 00:02: registered as rtc0
Nov 8 00:23:35.143760 kernel: rtc_cmos 00:02: setting system clock to 2025-11-08T00:23:34 UTC (1762561414)
Nov 8 00:23:35.143874 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Nov 8 00:23:35.143892 kernel: intel_pstate: CPU model not supported
Nov 8 00:23:35.143906 kernel: efifb: probing for efifb
Nov 8 00:23:35.143920 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Nov 8 00:23:35.143935 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Nov 8 00:23:35.143949 kernel: efifb: scrolling: redraw
Nov 8 00:23:35.143963 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Nov 8 00:23:35.143977 kernel: Console: switching to colour frame buffer device 128x48
Nov 8 00:23:35.143991 kernel: fb0: EFI VGA frame buffer device
Nov 8 00:23:35.144008 kernel: pstore: Using crash dump compression: deflate
Nov 8 00:23:35.144023 kernel: pstore: Registered efi_pstore as persistent store backend
Nov 8 00:23:35.144036 kernel: NET: Registered PF_INET6 protocol family
Nov 8 00:23:35.144051 kernel: Segment Routing with IPv6
Nov 8 00:23:35.144065 kernel: In-situ OAM (IOAM) with IPv6
Nov 8 00:23:35.144079 kernel: NET: Registered PF_PACKET protocol family
Nov 8 00:23:35.144093 kernel: Key type dns_resolver registered
Nov 8 00:23:35.144107 kernel: IPI shorthand broadcast: enabled
Nov 8 00:23:35.144121 kernel: sched_clock: Marking stable (1051003600, 70623400)->(1423534800, -301907800)
Nov 8 00:23:35.144138 kernel: registered taskstats version 1
Nov 8 00:23:35.144152 kernel: Loading compiled-in X.509 certificates
Nov 8 00:23:35.144166 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cf7a35a152685ec84a621291e4ce58c959319dfd'
Nov 8 00:23:35.144180 kernel: Key type .fscrypt registered
Nov 8 00:23:35.144193 kernel: Key type fscrypt-provisioning registered
Nov 8 00:23:35.144207 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 8 00:23:35.144221 kernel: ima: Allocated hash algorithm: sha1
Nov 8 00:23:35.144249 kernel: ima: No architecture policies found
Nov 8 00:23:35.144262 kernel: clk: Disabling unused clocks
Nov 8 00:23:35.144278 kernel: Freeing unused kernel image (initmem) memory: 42880K
Nov 8 00:23:35.144291 kernel: Write protecting the kernel read-only data: 36864k
Nov 8 00:23:35.144305 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Nov 8 00:23:35.144328 kernel: Run /init as init process
Nov 8 00:23:35.144341 kernel: with arguments:
Nov 8 00:23:35.144355 kernel: /init
Nov 8 00:23:35.144369 kernel: with environment:
Nov 8 00:23:35.144385 kernel: HOME=/
Nov 8 00:23:35.144399 kernel: TERM=linux
Nov 8 00:23:35.144420 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 8 00:23:35.144439 systemd[1]: Detected virtualization microsoft.
Nov 8 00:23:35.144455 systemd[1]: Detected architecture x86-64.
Nov 8 00:23:35.144470 systemd[1]: Running in initrd.
Nov 8 00:23:35.144486 systemd[1]: No hostname configured, using default hostname.
Nov 8 00:23:35.144501 systemd[1]: Hostname set to .
Nov 8 00:23:35.144518 systemd[1]: Initializing machine ID from random generator.
Nov 8 00:23:35.144536 systemd[1]: Queued start job for default target initrd.target.
Nov 8 00:23:35.144553 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 00:23:35.144569 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 00:23:35.144586 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 8 00:23:35.144602 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 8 00:23:35.144617 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 8 00:23:35.144634 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 8 00:23:35.144654 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 8 00:23:35.144670 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 8 00:23:35.144686 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 00:23:35.144702 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 8 00:23:35.144718 systemd[1]: Reached target paths.target - Path Units.
Nov 8 00:23:35.144734 systemd[1]: Reached target slices.target - Slice Units.
Nov 8 00:23:35.144750 systemd[1]: Reached target swap.target - Swaps.
Nov 8 00:23:35.144766 systemd[1]: Reached target timers.target - Timer Units.
Nov 8 00:23:35.144785 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 8 00:23:35.144801 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 8 00:23:35.144817 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 8 00:23:35.144833 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 8 00:23:35.144849 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 8 00:23:35.144865 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 8 00:23:35.144881 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 8 00:23:35.144897 systemd[1]: Reached target sockets.target - Socket Units.
Nov 8 00:23:35.144913 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 8 00:23:35.144932 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 8 00:23:35.144948 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 8 00:23:35.144964 systemd[1]: Starting systemd-fsck-usr.service...
Nov 8 00:23:35.144980 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 8 00:23:35.144997 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 8 00:23:35.145037 systemd-journald[176]: Collecting audit messages is disabled.
Nov 8 00:23:35.145076 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:23:35.145093 systemd-journald[176]: Journal started
Nov 8 00:23:35.145126 systemd-journald[176]: Runtime Journal (/run/log/journal/f44d69a756bd4d9180e56da1d900efca) is 8.0M, max 158.8M, 150.8M free.
Nov 8 00:23:35.141578 systemd-modules-load[177]: Inserted module 'overlay'
Nov 8 00:23:35.156269 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 8 00:23:35.159716 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 8 00:23:35.169734 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 8 00:23:35.176683 systemd[1]: Finished systemd-fsck-usr.service.
Nov 8 00:23:35.185407 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 8 00:23:35.185839 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:23:35.190516 systemd-modules-load[177]: Inserted module 'br_netfilter'
Nov 8 00:23:35.196032 kernel: Bridge firewalling registered
Nov 8 00:23:35.193738 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 8 00:23:35.205441 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:23:35.216423 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 8 00:23:35.222252 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 8 00:23:35.238407 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 8 00:23:35.244739 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:23:35.262432 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 8 00:23:35.267008 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 8 00:23:35.269769 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 8 00:23:35.279159 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 8 00:23:35.285567 dracut-cmdline[205]: dracut-dracut-053
Nov 8 00:23:35.291299 dracut-cmdline[205]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:23:35.305659 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 8 00:23:35.318054 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 8 00:23:35.335874 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 8 00:23:35.362818 systemd-resolved[218]: Positive Trust Anchors:
Nov 8 00:23:35.362841 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 8 00:23:35.362898 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 8 00:23:35.388873 systemd-resolved[218]: Defaulting to hostname 'linux'.
Nov 8 00:23:35.392463 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 8 00:23:35.397975 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 8 00:23:35.411250 kernel: SCSI subsystem initialized
Nov 8 00:23:35.422251 kernel: Loading iSCSI transport class v2.0-870.
Nov 8 00:23:35.434260 kernel: iscsi: registered transport (tcp)
Nov 8 00:23:35.455090 kernel: iscsi: registered transport (qla4xxx)
Nov 8 00:23:35.455194 kernel: QLogic iSCSI HBA Driver
Nov 8 00:23:35.492540 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 8 00:23:35.502423 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 8 00:23:35.531050 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 8 00:23:35.531160 kernel: device-mapper: uevent: version 1.0.3
Nov 8 00:23:35.534154 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 8 00:23:35.575260 kernel: raid6: avx512x4 gen() 18449 MB/s
Nov 8 00:23:35.594247 kernel: raid6: avx512x2 gen() 18455 MB/s
Nov 8 00:23:35.613248 kernel: raid6: avx512x1 gen() 18426 MB/s
Nov 8 00:23:35.632249 kernel: raid6: avx2x4 gen() 18370 MB/s
Nov 8 00:23:35.651242 kernel: raid6: avx2x2 gen() 18256 MB/s
Nov 8 00:23:35.671370 kernel: raid6: avx2x1 gen() 13989 MB/s
Nov 8 00:23:35.671423 kernel: raid6: using algorithm avx512x2 gen() 18455 MB/s
Nov 8 00:23:35.693001 kernel: raid6: .... xor() 27599 MB/s, rmw enabled
Nov 8 00:23:35.693040 kernel: raid6: using avx512x2 recovery algorithm
Nov 8 00:23:35.716265 kernel: xor: automatically using best checksumming function avx
Nov 8 00:23:35.868256 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 8 00:23:35.878164 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 8 00:23:35.887394 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 8 00:23:35.900404 systemd-udevd[397]: Using default interface naming scheme 'v255'.
Nov 8 00:23:35.905010 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 8 00:23:35.924410 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 8 00:23:35.937771 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation
Nov 8 00:23:35.964405 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 8 00:23:35.977490 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 8 00:23:36.019893 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 00:23:36.037491 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 8 00:23:36.066991 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 8 00:23:36.073818 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 8 00:23:36.079664 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 00:23:36.085193 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 8 00:23:36.096415 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 8 00:23:36.115607 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 8 00:23:36.129108 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 8 00:23:36.132064 kernel: cryptd: max_cpu_qlen set to 1000
Nov 8 00:23:36.129316 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:23:36.140533 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:23:36.146344 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 8 00:23:36.147328 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:23:36.153785 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:23:36.169349 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:23:36.182963 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 8 00:23:36.183031 kernel: AES CTR mode by8 optimization enabled
Nov 8 00:23:36.184489 kernel: hv_vmbus: Vmbus version:5.2
Nov 8 00:23:36.191467 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 8 00:23:36.192610 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:23:36.207899 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 8 00:23:36.209894 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:23:36.216640 kernel: hv_vmbus: registering driver hid_hyperv
Nov 8 00:23:36.231340 kernel: hv_vmbus: registering driver hyperv_keyboard
Nov 8 00:23:36.231396 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Nov 8 00:23:36.231418 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Nov 8 00:23:36.245937 kernel: hv_vmbus: registering driver hv_storvsc
Nov 8 00:23:36.247582 kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 8 00:23:36.247645 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Nov 8 00:23:36.247665 kernel: scsi host0: storvsc_host_t
Nov 8 00:23:36.250703 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Nov 8 00:23:36.253430 kernel: hv_vmbus: registering driver hv_netvsc
Nov 8 00:23:36.253472 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Nov 8 00:23:36.253511 kernel: scsi host1: storvsc_host_t
Nov 8 00:23:36.254247 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Nov 8 00:23:36.270270 kernel: PTP clock support registered
Nov 8 00:23:36.325248 kernel: hv_utils: Registering HyperV Utility Driver
Nov 8 00:23:36.325330 kernel: hv_vmbus: registering driver hv_utils
Nov 8 00:23:36.329646 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:23:36.345053 kernel: hv_utils: Heartbeat IC version 3.0
Nov 8 00:23:36.345119 kernel: hv_utils: Shutdown IC version 3.2
Nov 8 00:23:36.347031 kernel: hv_utils: TimeSync IC version 4.0
Nov 8 00:23:36.348558 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:23:36.707813 systemd-resolved[218]: Clock change detected. Flushing caches.
Nov 8 00:23:36.724102 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Nov 8 00:23:36.724390 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 8 00:23:36.725729 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Nov 8 00:23:36.750785 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Nov 8 00:23:36.751104 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Nov 8 00:23:36.751616 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:23:36.761867 kernel: sd 0:0:0:0: [sda] Write Protect is off
Nov 8 00:23:36.762107 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Nov 8 00:23:36.762282 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Nov 8 00:23:36.762411 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:23:36.767721 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Nov 8 00:23:36.805721 kernel: hv_netvsc 000d3ab3-9c71-000d-3ab3-9c71000d3ab3 eth0: VF slot 1 added
Nov 8 00:23:36.816400 kernel: hv_vmbus: registering driver hv_pci
Nov 8 00:23:36.816469 kernel: hv_pci 2848a79f-6661-4a74-bcb1-590f89001639: PCI VMBus probing: Using version 0x10004
Nov 8 00:23:36.822877 kernel: hv_pci 2848a79f-6661-4a74-bcb1-590f89001639: PCI host bridge to bus 6661:00
Nov 8 00:23:36.823167 kernel: pci_bus 6661:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Nov 8 00:23:36.825879 kernel: pci_bus 6661:00: No busn resource found for root bus, will use [bus 00-ff]
Nov 8 00:23:36.831010 kernel: pci 6661:00:02.0: [15b3:1016] type 00 class 0x020000
Nov 8 00:23:36.834731 kernel: pci 6661:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Nov 8 00:23:36.838039 kernel: pci 6661:00:02.0: enabling Extended Tags
Nov 8 00:23:36.849772 kernel: pci 6661:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 6661:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Nov 8 00:23:36.855686 kernel: pci_bus 6661:00: busn_res: [bus 00-ff] end is updated to 00
Nov 8 00:23:36.856017 kernel: pci 6661:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Nov 8 00:23:37.030822 kernel: mlx5_core 6661:00:02.0: enabling device (0000 -> 0002)
Nov 8 00:23:37.034717 kernel: mlx5_core 6661:00:02.0: firmware version: 14.30.5006
Nov 8 00:23:37.250000 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (447)
Nov 8 00:23:37.253885 kernel: hv_netvsc 000d3ab3-9c71-000d-3ab3-9c71000d3ab3 eth0: VF registering: eth1
Nov 8 00:23:37.257741 kernel: mlx5_core 6661:00:02.0 eth1: joined to eth0
Nov 8 00:23:37.257976 kernel: mlx5_core 6661:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Nov 8 00:23:37.278726 kernel: mlx5_core 6661:00:02.0 enP26209s1: renamed from eth1
Nov 8 00:23:37.280118 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Nov 8 00:23:37.299725 kernel: BTRFS: device fsid a2737782-a37e-42f9-8b56-489a87f47acc devid 1 transid 35 /dev/sda3 scanned by (udev-worker) (454)
Nov 8 00:23:37.316541 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Nov 8 00:23:37.324718 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Nov 8 00:23:37.331390 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Nov 8 00:23:37.347944 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 8 00:23:37.360911 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Nov 8 00:23:37.373720 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:23:37.383726 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:23:38.394941 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:23:38.395016 disk-uuid[603]: The operation has completed successfully.
Nov 8 00:23:38.490128 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 8 00:23:38.490245 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 8 00:23:38.502886 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 8 00:23:38.508453 sh[716]: Success
Nov 8 00:23:38.534723 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Nov 8 00:23:38.873122 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 8 00:23:38.898871 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 8 00:23:38.905289 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 8 00:23:38.923429 kernel: BTRFS info (device dm-0): first mount of filesystem a2737782-a37e-42f9-8b56-489a87f47acc
Nov 8 00:23:38.923512 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:23:38.926626 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 8 00:23:38.929131 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 8 00:23:38.931493 kernel: BTRFS info (device dm-0): using free space tree
Nov 8 00:23:39.246175 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 8 00:23:39.253130 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 8 00:23:39.268970 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 8 00:23:39.277840 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 8 00:23:39.304781 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:23:39.304858 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:23:39.307114 kernel: BTRFS info (device sda6): using free space tree
Nov 8 00:23:39.344738 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 8 00:23:39.355946 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 8 00:23:39.361088 kernel: BTRFS info (device sda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:23:39.371511 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 8 00:23:39.374167 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 8 00:23:39.388889 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 8 00:23:39.395409 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 8 00:23:39.426246 systemd-networkd[900]: lo: Link UP
Nov 8 00:23:39.426256 systemd-networkd[900]: lo: Gained carrier
Nov 8 00:23:39.428459 systemd-networkd[900]: Enumeration completed
Nov 8 00:23:39.428777 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 8 00:23:39.436996 systemd[1]: Reached target network.target - Network.
Nov 8 00:23:39.438123 systemd-networkd[900]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:23:39.438129 systemd-networkd[900]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 8 00:23:39.498724 kernel: mlx5_core 6661:00:02.0 enP26209s1: Link up
Nov 8 00:23:39.528740 kernel: hv_netvsc 000d3ab3-9c71-000d-3ab3-9c71000d3ab3 eth0: Data path switched to VF: enP26209s1
Nov 8 00:23:39.529608 systemd-networkd[900]: enP26209s1: Link UP
Nov 8 00:23:39.529757 systemd-networkd[900]: eth0: Link UP
Nov 8 00:23:39.529964 systemd-networkd[900]: eth0: Gained carrier
Nov 8 00:23:39.529979 systemd-networkd[900]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:23:39.538489 systemd-networkd[900]: enP26209s1: Gained carrier
Nov 8 00:23:39.568779 systemd-networkd[900]: eth0: DHCPv4 address 10.200.8.41/24, gateway 10.200.8.1 acquired from 168.63.129.16
Nov 8 00:23:40.208641 ignition[899]: Ignition 2.19.0
Nov 8 00:23:40.208653 ignition[899]: Stage: fetch-offline
Nov 8 00:23:40.208718 ignition[899]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:23:40.208729 ignition[899]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 8 00:23:40.208840 ignition[899]: parsed url from cmdline: ""
Nov 8 00:23:40.208844 ignition[899]: no config URL provided
Nov 8 00:23:40.208850 ignition[899]: reading system config file "/usr/lib/ignition/user.ign"
Nov 8 00:23:40.208862 ignition[899]: no config at "/usr/lib/ignition/user.ign"
Nov 8 00:23:40.208868 ignition[899]: failed to fetch config: resource requires networking
Nov 8 00:23:40.209074 ignition[899]: Ignition finished successfully
Nov 8 00:23:40.227187 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 8 00:23:40.235864 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 8 00:23:40.250577 ignition[909]: Ignition 2.19.0
Nov 8 00:23:40.250588 ignition[909]: Stage: fetch
Nov 8 00:23:40.250841 ignition[909]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:23:40.250854 ignition[909]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 8 00:23:40.250963 ignition[909]: parsed url from cmdline: ""
Nov 8 00:23:40.250967 ignition[909]: no config URL provided
Nov 8 00:23:40.250973 ignition[909]: reading system config file "/usr/lib/ignition/user.ign"
Nov 8 00:23:40.250980 ignition[909]: no config at "/usr/lib/ignition/user.ign"
Nov 8 00:23:40.250999 ignition[909]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Nov 8 00:23:40.359320 ignition[909]: GET result: OK
Nov 8 00:23:40.359468 ignition[909]: config has been read from IMDS userdata
Nov 8 00:23:40.363500 unknown[909]: fetched base config from "system"
Nov 8 00:23:40.359510 ignition[909]: parsing config with SHA512: f842820e8f3007f8e74b0fcc9d574c30073b7d540373a5b8cb2a3a231cd551fbc8469d88ccbeb4b39e22596c1395c3cd79c5611e060ecb40558a29b1db514d65
Nov 8 00:23:40.363507 unknown[909]: fetched base config from "system"
Nov 8 00:23:40.365983 ignition[909]: fetch: fetch complete
Nov 8 00:23:40.363512 unknown[909]: fetched user config from "azure"
Nov 8 00:23:40.365990 ignition[909]: fetch: fetch passed
Nov 8 00:23:40.368726 ignition[909]: Ignition finished successfully
Nov 8 00:23:40.384136 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 8 00:23:40.397004 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 8 00:23:40.418486 ignition[915]: Ignition 2.19.0
Nov 8 00:23:40.418497 ignition[915]: Stage: kargs
Nov 8 00:23:40.418745 ignition[915]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:23:40.418764 ignition[915]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 8 00:23:40.423856 ignition[915]: kargs: kargs passed
Nov 8 00:23:40.423904 ignition[915]: Ignition finished successfully
Nov 8 00:23:40.435323 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 8 00:23:40.445942 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 8 00:23:40.469468 ignition[921]: Ignition 2.19.0
Nov 8 00:23:40.469480 ignition[921]: Stage: disks
Nov 8 00:23:40.469714 ignition[921]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:23:40.469728 ignition[921]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 8 00:23:40.477992 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 8 00:23:40.475387 ignition[921]: disks: disks passed
Nov 8 00:23:40.482454 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 8 00:23:40.475434 ignition[921]: Ignition finished successfully
Nov 8 00:23:40.491256 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 8 00:23:40.494747 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 8 00:23:40.514012 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 8 00:23:40.516647 systemd[1]: Reached target basic.target - Basic System.
Nov 8 00:23:40.527881 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 8 00:23:40.584963 systemd-fsck[929]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Nov 8 00:23:40.590658 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 8 00:23:40.604806 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 8 00:23:40.703718 kernel: EXT4-fs (sda9): mounted filesystem 3cd35b5c-4e0e-45c1-abc9-cf70eebd42df r/w with ordered data mode. Quota mode: none.
Nov 8 00:23:40.704777 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 8 00:23:40.707344 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 8 00:23:40.742855 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 8 00:23:40.757727 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (940)
Nov 8 00:23:40.760819 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 8 00:23:40.762631 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:23:40.768355 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:23:40.768382 kernel: BTRFS info (device sda6): using free space tree
Nov 8 00:23:40.770263 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Nov 8 00:23:40.775660 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 8 00:23:40.785786 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 8 00:23:40.776602 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 8 00:23:40.793019 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 8 00:23:40.796759 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 8 00:23:40.808916 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 8 00:23:41.331186 coreos-metadata[942]: Nov 08 00:23:41.331 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Nov 8 00:23:41.336536 coreos-metadata[942]: Nov 08 00:23:41.335 INFO Fetch successful
Nov 8 00:23:41.336536 coreos-metadata[942]: Nov 08 00:23:41.335 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Nov 8 00:23:41.348867 coreos-metadata[942]: Nov 08 00:23:41.346 INFO Fetch successful
Nov 8 00:23:41.360863 coreos-metadata[942]: Nov 08 00:23:41.360 INFO wrote hostname ci-4081.3.6-n-036966ce4d to /sysroot/etc/hostname
Nov 8 00:23:41.366983 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 8 00:23:41.380841 systemd-networkd[900]: eth0: Gained IPv6LL
Nov 8 00:23:41.426206 initrd-setup-root[969]: cut: /sysroot/etc/passwd: No such file or directory
Nov 8 00:23:41.472058 initrd-setup-root[976]: cut: /sysroot/etc/group: No such file or directory
Nov 8 00:23:41.477173 initrd-setup-root[983]: cut: /sysroot/etc/shadow: No such file or directory
Nov 8 00:23:41.482116 initrd-setup-root[990]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 8 00:23:42.376235 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 8 00:23:42.383845 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 8 00:23:42.389183 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 8 00:23:42.405023 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 8 00:23:42.410414 kernel: BTRFS info (device sda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:23:42.433966 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 8 00:23:42.442046 ignition[1058]: INFO : Ignition 2.19.0
Nov 8 00:23:42.442046 ignition[1058]: INFO : Stage: mount
Nov 8 00:23:42.445605 ignition[1058]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 00:23:42.445605 ignition[1058]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 8 00:23:42.450869 ignition[1058]: INFO : mount: mount passed
Nov 8 00:23:42.450869 ignition[1058]: INFO : Ignition finished successfully
Nov 8 00:23:42.454652 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 8 00:23:42.462887 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 8 00:23:42.473364 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 8 00:23:42.503722 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1069)
Nov 8 00:23:42.507710 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:23:42.507747 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:23:42.512089 kernel: BTRFS info (device sda6): using free space tree
Nov 8 00:23:42.518717 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 8 00:23:42.520897 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 8 00:23:42.551038 ignition[1086]: INFO : Ignition 2.19.0 Nov 8 00:23:42.551038 ignition[1086]: INFO : Stage: files Nov 8 00:23:42.555058 ignition[1086]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:23:42.555058 ignition[1086]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 8 00:23:42.555058 ignition[1086]: DEBUG : files: compiled without relabeling support, skipping Nov 8 00:23:42.566469 ignition[1086]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 8 00:23:42.566469 ignition[1086]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 8 00:23:42.640339 ignition[1086]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 8 00:23:42.644047 ignition[1086]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 8 00:23:42.644047 ignition[1086]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 8 00:23:42.640855 unknown[1086]: wrote ssh authorized keys file for user: core Nov 8 00:23:42.657256 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 8 00:23:42.661650 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 8 00:23:42.711402 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 8 00:23:42.766770 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 8 00:23:42.774552 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 8 00:23:42.774552 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 8 00:23:42.774552 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 8 00:23:42.774552 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 8 00:23:42.774552 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:23:42.774552 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:23:42.774552 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 8 00:23:42.774552 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 8 00:23:42.774552 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:23:42.774552 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:23:42.774552 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 8 00:23:42.774552 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 8 00:23:42.774552 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 8 00:23:42.774552 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Nov 8 00:23:43.123978 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 8 00:23:44.388507 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 8 00:23:44.388507 ignition[1086]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 8 00:23:44.397535 ignition[1086]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:23:44.402157 ignition[1086]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:23:44.402157 ignition[1086]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 8 00:23:44.402157 ignition[1086]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Nov 8 00:23:44.412822 ignition[1086]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Nov 8 00:23:44.416011 ignition[1086]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:23:44.420186 ignition[1086]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:23:44.424061 ignition[1086]: INFO : files: files passed Nov 8 00:23:44.425645 ignition[1086]: INFO : Ignition finished successfully Nov 8 00:23:44.429274 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 8 00:23:44.438850 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 8 00:23:44.444879 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 8 00:23:44.451021 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 8 00:23:44.451142 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 8 00:23:44.471132 initrd-setup-root-after-ignition[1115]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:23:44.471132 initrd-setup-root-after-ignition[1115]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:23:44.478768 initrd-setup-root-after-ignition[1119]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:23:44.475160 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:23:44.482016 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 8 00:23:44.495889 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 8 00:23:44.522309 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 8 00:23:44.522400 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 8 00:23:44.530282 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Nov 8 00:23:44.535066 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 8 00:23:44.539435 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 8 00:23:44.548901 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 8 00:23:44.562865 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:23:44.573920 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 8 00:23:44.587543 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:23:44.592975 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:23:44.595973 systemd[1]: Stopped target timers.target - Timer Units. Nov 8 00:23:44.600631 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 8 00:23:44.600793 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:23:44.605967 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 8 00:23:44.609786 systemd[1]: Stopped target basic.target - Basic System. Nov 8 00:23:44.614490 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 8 00:23:44.620816 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 8 00:23:44.628078 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 8 00:23:44.630748 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 8 00:23:44.635423 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 8 00:23:44.640360 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 8 00:23:44.645480 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 8 00:23:44.649839 systemd[1]: Stopped target swap.target - Swaps. Nov 8 00:23:44.655239 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 8 00:23:44.657547 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 8 00:23:44.660466 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:23:44.664662 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:23:44.667431 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 8 00:23:44.669525 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:23:44.672427 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 8 00:23:44.672591 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 8 00:23:44.686773 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 8 00:23:44.686930 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:23:44.692383 systemd[1]: ignition-files.service: Deactivated successfully. Nov 8 00:23:44.692532 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 8 00:23:44.701289 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 8 00:23:44.701464 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 8 00:23:44.714176 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 8 00:23:44.721961 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 8 00:23:44.726431 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Nov 8 00:23:44.728984 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:23:44.734687 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 8 00:23:44.739724 ignition[1139]: INFO : Ignition 2.19.0 Nov 8 00:23:44.739724 ignition[1139]: INFO : Stage: umount Nov 8 00:23:44.739724 ignition[1139]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:23:44.739724 ignition[1139]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 8 00:23:44.739724 ignition[1139]: INFO : umount: umount passed Nov 8 00:23:44.739724 ignition[1139]: INFO : Ignition finished successfully Nov 8 00:23:44.734857 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 8 00:23:44.743611 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 8 00:23:44.743755 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 8 00:23:44.749752 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 8 00:23:44.750040 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 8 00:23:44.754523 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 8 00:23:44.754577 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 8 00:23:44.756951 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 8 00:23:44.756998 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 8 00:23:44.761978 systemd[1]: Stopped target network.target - Network. Nov 8 00:23:44.787014 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 8 00:23:44.789555 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:23:44.794724 systemd[1]: Stopped target paths.target - Path Units. Nov 8 00:23:44.796741 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 8 00:23:44.801079 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:23:44.807082 systemd[1]: Stopped target slices.target - Slice Units. Nov 8 00:23:44.809072 systemd[1]: Stopped target sockets.target - Socket Units. Nov 8 00:23:44.812947 systemd[1]: iscsid.socket: Deactivated successfully. Nov 8 00:23:44.813008 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 00:23:44.817404 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 8 00:23:44.819403 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 8 00:23:44.827972 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 8 00:23:44.828059 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 8 00:23:44.834902 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 8 00:23:44.834978 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 8 00:23:44.839582 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 8 00:23:44.846653 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 8 00:23:44.850228 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 8 00:23:44.850887 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 8 00:23:44.850985 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 8 00:23:44.851211 systemd-networkd[900]: eth0: DHCPv6 lease lost Nov 8 00:23:44.862933 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 8 00:23:44.863043 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
Nov 8 00:23:44.866450 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 8 00:23:44.866547 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 8 00:23:44.873518 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 8 00:23:44.873577 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:23:44.891858 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 8 00:23:44.895077 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 8 00:23:44.895138 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:23:44.899215 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 8 00:23:44.899262 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:23:44.908331 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 8 00:23:44.908381 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 8 00:23:44.915674 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 8 00:23:44.915736 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:23:44.920690 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:23:44.943386 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 8 00:23:44.943555 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:23:44.948592 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 8 00:23:44.948636 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 8 00:23:44.953646 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 8 00:23:44.953687 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:23:44.958228 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 8 00:23:44.958286 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 8 00:23:44.971494 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 8 00:23:44.971565 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 8 00:23:44.982120 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 00:23:44.984278 kernel: hv_netvsc 000d3ab3-9c71-000d-3ab3-9c71000d3ab3 eth0: Data path switched from VF: enP26209s1 Nov 8 00:23:44.982185 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:23:44.992956 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 8 00:23:44.995413 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 8 00:23:44.998086 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:23:45.007116 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 8 00:23:45.007177 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 8 00:23:45.012331 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 8 00:23:45.012375 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:23:45.020357 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:23:45.023569 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Nov 8 00:23:45.033396 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 8 00:23:45.033538 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 8 00:23:45.038291 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 8 00:23:45.038381 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 8 00:23:45.360283 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 8 00:23:45.360464 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 8 00:23:45.366870 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 8 00:23:45.376759 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 8 00:23:45.380165 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 8 00:23:45.391890 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 8 00:23:45.827065 systemd[1]: Switching root. Nov 8 00:23:45.887765 systemd-journald[176]: Journal stopped
Nov 8 00:23:39.498724 kernel: mlx5_core 6661:00:02.0 enP26209s1: Link up Nov 8 00:23:39.528740 kernel: hv_netvsc 000d3ab3-9c71-000d-3ab3-9c71000d3ab3 eth0: Data path switched to VF: enP26209s1 Nov 8 00:23:39.529608 systemd-networkd[900]: enP26209s1: Link UP Nov 8 00:23:39.529757 systemd-networkd[900]: eth0: Link UP Nov 8 00:23:39.529964 systemd-networkd[900]: eth0: Gained carrier Nov 8 00:23:39.529979 systemd-networkd[900]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:23:39.538489 systemd-networkd[900]: enP26209s1: Gained carrier Nov 8 00:23:39.568779 systemd-networkd[900]: eth0: DHCPv4 address 10.200.8.41/24, gateway 10.200.8.1 acquired from 168.63.129.16 Nov 8 00:23:40.208641 ignition[899]: Ignition 2.19.0 Nov 8 00:23:40.208653 ignition[899]: Stage: fetch-offline Nov 8 00:23:40.208718 ignition[899]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:23:40.208729 ignition[899]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 8 00:23:40.208840 ignition[899]: parsed url from cmdline: "" Nov 8 00:23:40.208844 ignition[899]: no config URL provided Nov 8 00:23:40.208850 ignition[899]: reading system config file "/usr/lib/ignition/user.ign" Nov 8 00:23:40.208862 ignition[899]: no config at "/usr/lib/ignition/user.ign" Nov 8 00:23:40.208868 ignition[899]: failed to fetch config: resource requires networking Nov 8 00:23:40.209074 ignition[899]: Ignition finished successfully Nov 8 00:23:40.227187 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:23:40.235864 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Nov 8 00:23:40.250577 ignition[909]: Ignition 2.19.0 Nov 8 00:23:40.250588 ignition[909]: Stage: fetch Nov 8 00:23:40.250841 ignition[909]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:23:40.250854 ignition[909]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 8 00:23:40.250963 ignition[909]: parsed url from cmdline: "" Nov 8 00:23:40.250967 ignition[909]: no config URL provided Nov 8 00:23:40.250973 ignition[909]: reading system config file "/usr/lib/ignition/user.ign" Nov 8 00:23:40.250980 ignition[909]: no config at "/usr/lib/ignition/user.ign" Nov 8 00:23:40.250999 ignition[909]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Nov 8 00:23:40.359320 ignition[909]: GET result: OK Nov 8 00:23:40.359468 ignition[909]: config has been read from IMDS userdata Nov 8 00:23:40.363500 unknown[909]: fetched base config from "system" Nov 8 00:23:40.359510 ignition[909]: parsing config with SHA512: f842820e8f3007f8e74b0fcc9d574c30073b7d540373a5b8cb2a3a231cd551fbc8469d88ccbeb4b39e22596c1395c3cd79c5611e060ecb40558a29b1db514d65 Nov 8 00:23:40.363507 unknown[909]: fetched base config from "system" Nov 8 00:23:40.365983 ignition[909]: fetch: fetch complete Nov 8 00:23:40.363512 unknown[909]: fetched user config from "azure" Nov 8 00:23:40.365990 ignition[909]: fetch: fetch passed Nov 8 00:23:40.368726 ignition[909]: Ignition finished successfully Nov 8 00:23:40.384136 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 8 00:23:40.397004 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Nov 8 00:23:40.418486 ignition[915]: Ignition 2.19.0 Nov 8 00:23:40.418497 ignition[915]: Stage: kargs Nov 8 00:23:40.418745 ignition[915]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:23:40.418764 ignition[915]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 8 00:23:40.423856 ignition[915]: kargs: kargs passed Nov 8 00:23:40.423904 ignition[915]: Ignition finished successfully Nov 8 00:23:40.435323 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 8 00:23:40.445942 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 8 00:23:40.469468 ignition[921]: Ignition 2.19.0 Nov 8 00:23:40.469480 ignition[921]: Stage: disks Nov 8 00:23:40.469714 ignition[921]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:23:40.469728 ignition[921]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 8 00:23:40.477992 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 8 00:23:40.475387 ignition[921]: disks: disks passed Nov 8 00:23:40.482454 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 8 00:23:40.475434 ignition[921]: Ignition finished successfully Nov 8 00:23:40.491256 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 8 00:23:40.494747 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 8 00:23:40.514012 systemd[1]: Reached target sysinit.target - System Initialization. Nov 8 00:23:40.516647 systemd[1]: Reached target basic.target - Basic System. Nov 8 00:23:40.527881 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 8 00:23:40.584963 systemd-fsck[929]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Nov 8 00:23:40.590658 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 8 00:23:40.604806 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 8 00:23:40.703718 kernel: EXT4-fs (sda9): mounted filesystem 3cd35b5c-4e0e-45c1-abc9-cf70eebd42df r/w with ordered data mode. Quota mode: none. Nov 8 00:23:40.704777 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 8 00:23:40.707344 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 8 00:23:40.742855 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 8 00:23:40.757727 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (940) Nov 8 00:23:40.760819 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 8 00:23:40.762631 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:23:40.768355 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:23:40.768382 kernel: BTRFS info (device sda6): using free space tree Nov 8 00:23:40.770263 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Nov 8 00:23:40.775660 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 8 00:23:40.785786 kernel: BTRFS info (device sda6): auto enabling async discard Nov 8 00:23:40.776602 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 8 00:23:40.793019 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 8 00:23:40.796759 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 8 00:23:40.808916 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Nov 8 00:23:41.331186 coreos-metadata[942]: Nov 08 00:23:41.331 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Nov 8 00:23:41.336536 coreos-metadata[942]: Nov 08 00:23:41.335 INFO Fetch successful Nov 8 00:23:41.336536 coreos-metadata[942]: Nov 08 00:23:41.335 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Nov 8 00:23:41.348867 coreos-metadata[942]: Nov 08 00:23:41.346 INFO Fetch successful Nov 8 00:23:41.360863 coreos-metadata[942]: Nov 08 00:23:41.360 INFO wrote hostname ci-4081.3.6-n-036966ce4d to /sysroot/etc/hostname Nov 8 00:23:41.366983 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 8 00:23:41.380841 systemd-networkd[900]: eth0: Gained IPv6LL Nov 8 00:23:41.426206 initrd-setup-root[969]: cut: /sysroot/etc/passwd: No such file or directory Nov 8 00:23:41.472058 initrd-setup-root[976]: cut: /sysroot/etc/group: No such file or directory Nov 8 00:23:41.477173 initrd-setup-root[983]: cut: /sysroot/etc/shadow: No such file or directory Nov 8 00:23:41.482116 initrd-setup-root[990]: cut: /sysroot/etc/gshadow: No such file or directory Nov 8 00:23:42.376235 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 8 00:23:42.383845 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 8 00:23:42.389183 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 8 00:23:42.405023 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 8 00:23:42.410414 kernel: BTRFS info (device sda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:23:42.433966 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 8 00:23:42.442046 ignition[1058]: INFO : Ignition 2.19.0 Nov 8 00:23:42.442046 ignition[1058]: INFO : Stage: mount Nov 8 00:23:42.445605 ignition[1058]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:23:42.445605 ignition[1058]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 8 00:23:42.450869 ignition[1058]: INFO : mount: mount passed Nov 8 00:23:42.450869 ignition[1058]: INFO : Ignition finished successfully Nov 8 00:23:42.454652 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 8 00:23:42.462887 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 8 00:23:42.473364 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 8 00:23:42.503722 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1069) Nov 8 00:23:42.507710 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:23:42.507747 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:23:42.512089 kernel: BTRFS info (device sda6): using free space tree Nov 8 00:23:42.518717 kernel: BTRFS info (device sda6): auto enabling async discard Nov 8 00:23:42.520897 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 8 00:23:42.551038 ignition[1086]: INFO : Ignition 2.19.0 Nov 8 00:23:42.551038 ignition[1086]: INFO : Stage: files Nov 8 00:23:42.555058 ignition[1086]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:23:42.555058 ignition[1086]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 8 00:23:42.555058 ignition[1086]: DEBUG : files: compiled without relabeling support, skipping Nov 8 00:23:42.566469 ignition[1086]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 8 00:23:42.566469 ignition[1086]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 8 00:23:42.640339 ignition[1086]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 8 00:23:42.644047 ignition[1086]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 8 00:23:42.644047 ignition[1086]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 8 00:23:42.640855 unknown[1086]: wrote ssh authorized keys file for user: core Nov 8 00:23:42.657256 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 8 00:23:42.661650 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 8 00:23:42.711402 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 8 00:23:42.766770 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 8 00:23:42.774552 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 8 00:23:42.774552 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 8 00:23:42.774552 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 8 00:23:42.774552 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 8 00:23:42.774552 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:23:42.774552 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:23:42.774552 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 8 00:23:42.774552 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 8 00:23:42.774552 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:23:42.774552 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:23:42.774552 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 8 00:23:42.774552 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 8 00:23:42.774552 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 8 00:23:42.774552 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Nov 8 00:23:43.123978 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 8 00:23:44.388507 ignition[1086]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Nov 8 00:23:44.388507 ignition[1086]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 8 00:23:44.397535 ignition[1086]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:23:44.402157 ignition[1086]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:23:44.402157 ignition[1086]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 8 00:23:44.402157 ignition[1086]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Nov 8 00:23:44.412822 ignition[1086]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Nov 8 00:23:44.416011 ignition[1086]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:23:44.420186 ignition[1086]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:23:44.424061 ignition[1086]: INFO : files: files passed Nov 8 00:23:44.425645 ignition[1086]: INFO : Ignition finished successfully Nov 8 00:23:44.429274 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 8 00:23:44.438850 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 8 00:23:44.444879 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 8 00:23:44.451021 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 8 00:23:44.451142 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 8 00:23:44.471132 initrd-setup-root-after-ignition[1115]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:23:44.471132 initrd-setup-root-after-ignition[1115]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:23:44.478768 initrd-setup-root-after-ignition[1119]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:23:44.475160 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:23:44.482016 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 8 00:23:44.495889 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 8 00:23:44.522309 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 8 00:23:44.522400 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 8 00:23:44.530282 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Nov 8 00:23:44.535066 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 8 00:23:44.539435 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 8 00:23:44.548901 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 8 00:23:44.562865 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:23:44.573920 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 8 00:23:44.587543 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:23:44.592975 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:23:44.595973 systemd[1]: Stopped target timers.target - Timer Units. Nov 8 00:23:44.600631 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 8 00:23:44.600793 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:23:44.605967 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 8 00:23:44.609786 systemd[1]: Stopped target basic.target - Basic System. Nov 8 00:23:44.614490 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 8 00:23:44.620816 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 8 00:23:44.628078 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 8 00:23:44.630748 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 8 00:23:44.635423 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 8 00:23:44.640360 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 8 00:23:44.645480 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 8 00:23:44.649839 systemd[1]: Stopped target swap.target - Swaps. Nov 8 00:23:44.655239 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 8 00:23:44.657547 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 8 00:23:44.660466 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:23:44.664662 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:23:44.667431 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 8 00:23:44.669525 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:23:44.672427 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 8 00:23:44.672591 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 8 00:23:44.686773 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 8 00:23:44.686930 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:23:44.692383 systemd[1]: ignition-files.service: Deactivated successfully. Nov 8 00:23:44.692532 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 8 00:23:44.701289 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 8 00:23:44.701464 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 8 00:23:44.714176 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 8 00:23:44.721961 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 8 00:23:44.726431 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Nov 8 00:23:44.728984 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:23:44.734687 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 8 00:23:44.739724 ignition[1139]: INFO : Ignition 2.19.0 Nov 8 00:23:44.739724 ignition[1139]: INFO : Stage: umount Nov 8 00:23:44.739724 ignition[1139]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:23:44.739724 ignition[1139]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 8 00:23:44.739724 ignition[1139]: INFO : umount: umount passed Nov 8 00:23:44.739724 ignition[1139]: INFO : Ignition finished successfully Nov 8 00:23:44.734857 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 8 00:23:44.743611 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 8 00:23:44.743755 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 8 00:23:44.749752 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 8 00:23:44.750040 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 8 00:23:44.754523 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 8 00:23:44.754577 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 8 00:23:44.756951 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 8 00:23:44.756998 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 8 00:23:44.761978 systemd[1]: Stopped target network.target - Network. Nov 8 00:23:44.787014 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 8 00:23:44.789555 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:23:44.794724 systemd[1]: Stopped target paths.target - Path Units. Nov 8 00:23:44.796741 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 8 00:23:44.801079 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:23:44.807082 systemd[1]: Stopped target slices.target - Slice Units. Nov 8 00:23:44.809072 systemd[1]: Stopped target sockets.target - Socket Units. Nov 8 00:23:44.812947 systemd[1]: iscsid.socket: Deactivated successfully. Nov 8 00:23:44.813008 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 00:23:44.817404 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 8 00:23:44.819403 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 8 00:23:44.827972 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 8 00:23:44.828059 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 8 00:23:44.834902 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 8 00:23:44.834978 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 8 00:23:44.839582 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 8 00:23:44.846653 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 8 00:23:44.850228 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 8 00:23:44.850887 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 8 00:23:44.850985 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 8 00:23:44.851211 systemd-networkd[900]: eth0: DHCPv6 lease lost Nov 8 00:23:44.862933 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 8 00:23:44.863043 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
Nov 8 00:23:44.866450 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 8 00:23:44.866547 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 8 00:23:44.873518 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 8 00:23:44.873577 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:23:44.891858 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 8 00:23:44.895077 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 8 00:23:44.895138 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:23:44.899215 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 8 00:23:44.899262 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:23:44.908331 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 8 00:23:44.908381 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 8 00:23:44.915674 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 8 00:23:44.915736 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:23:44.920690 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:23:44.943386 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 8 00:23:44.943555 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:23:44.948592 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 8 00:23:44.948636 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 8 00:23:44.953646 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 8 00:23:44.953687 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:23:44.958228 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 8 00:23:44.958286 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 8 00:23:44.971494 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 8 00:23:44.971565 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 8 00:23:44.982120 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 00:23:44.984278 kernel: hv_netvsc 000d3ab3-9c71-000d-3ab3-9c71000d3ab3 eth0: Data path switched from VF: enP26209s1 Nov 8 00:23:44.982185 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:23:44.992956 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 8 00:23:44.995413 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 8 00:23:44.998086 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:23:45.007116 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 8 00:23:45.007177 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 8 00:23:45.012331 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 8 00:23:45.012375 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:23:45.020357 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:23:45.023569 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Nov 8 00:23:45.033396 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 8 00:23:45.033538 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 8 00:23:45.038291 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 8 00:23:45.038381 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 8 00:23:45.360283 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 8 00:23:45.360464 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 8 00:23:45.366870 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 8 00:23:45.376759 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 8 00:23:45.380165 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 8 00:23:45.391890 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 8 00:23:45.827065 systemd[1]: Switching root. Nov 8 00:23:45.887765 systemd-journald[176]: Journal stopped Nov 8 00:23:50.132167 systemd-journald[176]: Received SIGTERM from PID 1 (systemd). Nov 8 00:23:50.132205 kernel: SELinux: policy capability network_peer_controls=1 Nov 8 00:23:50.132222 kernel: SELinux: policy capability open_perms=1 Nov 8 00:23:50.132235 kernel: SELinux: policy capability extended_socket_class=1 Nov 8 00:23:50.132248 kernel: SELinux: policy capability always_check_network=0 Nov 8 00:23:50.132261 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 8 00:23:50.132275 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 8 00:23:50.132291 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 8 00:23:50.132305 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 8 00:23:50.132318 kernel: audit: type=1403 audit(1762561426.876:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 8 00:23:50.132334 systemd[1]: Successfully loaded SELinux policy in 128.794ms. Nov 8 00:23:50.132350 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.835ms. Nov 8 00:23:50.132366 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 8 00:23:50.132381 systemd[1]: Detected virtualization microsoft. Nov 8 00:23:50.132400 systemd[1]: Detected architecture x86-64. Nov 8 00:23:50.132415 systemd[1]: Detected first boot. Nov 8 00:23:50.132431 systemd[1]: Hostname set to . Nov 8 00:23:50.132446 systemd[1]: Initializing machine ID from random generator. Nov 8 00:23:50.132461 zram_generator::config[1182]: No configuration found. Nov 8 00:23:50.132480 systemd[1]: Populated /etc with preset unit settings. Nov 8 00:23:50.132495 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 8 00:23:50.132510 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 8 00:23:50.132525 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 8 00:23:50.132541 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 8 00:23:50.132557 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 8 00:23:50.132573 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 8 00:23:50.132591 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. 
Nov 8 00:23:50.132607 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 8 00:23:50.132623 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 8 00:23:50.132639 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 8 00:23:50.132654 systemd[1]: Created slice user.slice - User and Session Slice. Nov 8 00:23:50.132671 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:23:50.132687 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:23:50.140928 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 8 00:23:50.140962 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 8 00:23:50.140981 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 8 00:23:50.141000 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 8 00:23:50.141017 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 8 00:23:50.141034 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:23:50.141052 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 8 00:23:50.141074 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 8 00:23:50.141092 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 8 00:23:50.141113 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 8 00:23:50.141131 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:23:50.141148 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 8 00:23:50.141166 systemd[1]: Reached target slices.target - Slice Units. Nov 8 00:23:50.141184 systemd[1]: Reached target swap.target - Swaps. Nov 8 00:23:50.141201 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 8 00:23:50.141219 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 8 00:23:50.141240 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:23:50.141258 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 8 00:23:50.141277 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:23:50.141295 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 8 00:23:50.141313 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 8 00:23:50.141334 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 8 00:23:50.141352 systemd[1]: Mounting media.mount - External Media Directory... Nov 8 00:23:50.141371 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:23:50.141389 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 8 00:23:50.141407 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 8 00:23:50.141425 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Nov 8 00:23:50.141444 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 8 00:23:50.141463 systemd[1]: Reached target machines.target - Containers. Nov 8 00:23:50.141483 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 8 00:23:50.141504 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:23:50.141523 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 8 00:23:50.141541 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 8 00:23:50.141559 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:23:50.141577 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 8 00:23:50.141595 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:23:50.141613 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 8 00:23:50.141631 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:23:50.141653 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 8 00:23:50.141671 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 8 00:23:50.141689 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 8 00:23:50.141716 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 8 00:23:50.141735 systemd[1]: Stopped systemd-fsck-usr.service. Nov 8 00:23:50.141753 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 8 00:23:50.141771 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 8 00:23:50.141789 kernel: fuse: init (API version 7.39) Nov 8 00:23:50.141808 kernel: loop: module loaded Nov 8 00:23:50.141825 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 8 00:23:50.141844 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 8 00:23:50.141862 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 8 00:23:50.141880 systemd[1]: verity-setup.service: Deactivated successfully. Nov 8 00:23:50.141899 systemd[1]: Stopped verity-setup.service. Nov 8 00:23:50.141918 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:23:50.141962 systemd-journald[1288]: Collecting audit messages is disabled. Nov 8 00:23:50.142002 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 8 00:23:50.142021 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 8 00:23:50.142039 systemd[1]: Mounted media.mount - External Media Directory. Nov 8 00:23:50.142058 systemd-journald[1288]: Journal started Nov 8 00:23:50.142096 systemd-journald[1288]: Runtime Journal (/run/log/journal/f9e33f77a22d497aa4cc535a375e162e) is 8.0M, max 158.8M, 150.8M free. Nov 8 00:23:49.345170 systemd[1]: Queued start job for default target multi-user.target. Nov 8 00:23:49.527067 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Nov 8 00:23:49.527506 systemd[1]: systemd-journald.service: Deactivated successfully. 
Nov 8 00:23:50.148730 systemd[1]: Started systemd-journald.service - Journal Service. Nov 8 00:23:50.152408 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 8 00:23:50.155311 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 8 00:23:50.158317 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 8 00:23:50.162017 kernel: ACPI: bus type drm_connector registered Nov 8 00:23:50.162604 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 8 00:23:50.166076 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:23:50.169288 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 8 00:23:50.169452 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 8 00:23:50.172648 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:23:50.173177 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:23:50.176328 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 8 00:23:50.176559 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 8 00:23:50.179385 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:23:50.179545 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:23:50.182991 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 8 00:23:50.183143 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 8 00:23:50.185900 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:23:50.186056 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:23:50.189033 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 8 00:23:50.192621 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 8 00:23:50.196502 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 8 00:23:50.209682 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 8 00:23:50.218800 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 8 00:23:50.225425 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 8 00:23:50.228360 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 8 00:23:50.228492 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 8 00:23:50.232370 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 8 00:23:50.239922 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 8 00:23:50.244053 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 8 00:23:50.246382 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:23:50.259245 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 8 00:23:50.264404 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 8 00:23:50.267828 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Nov 8 00:23:50.272941 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 8 00:23:50.275651 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 8 00:23:50.278240 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 8 00:23:50.288892 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 8 00:23:50.295664 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 8 00:23:50.301369 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 8 00:23:50.304255 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 8 00:23:50.308623 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 8 00:23:50.331963 systemd-journald[1288]: Time spent on flushing to /var/log/journal/f9e33f77a22d497aa4cc535a375e162e is 48.180ms for 957 entries. Nov 8 00:23:50.331963 systemd-journald[1288]: System Journal (/var/log/journal/f9e33f77a22d497aa4cc535a375e162e) is 8.0M, max 2.6G, 2.6G free. Nov 8 00:23:50.407576 systemd-journald[1288]: Received client request to flush runtime journal. Nov 8 00:23:50.407617 kernel: loop0: detected capacity change from 0 to 31056 Nov 8 00:23:50.349404 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 8 00:23:50.353741 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 8 00:23:50.365162 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 8 00:23:50.404774 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:23:50.409459 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 8 00:23:50.422965 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 8 00:23:50.436556 systemd-tmpfiles[1319]: ACLs are not supported, ignoring. Nov 8 00:23:50.436584 systemd-tmpfiles[1319]: ACLs are not supported, ignoring. Nov 8 00:23:50.440342 udevadm[1331]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Nov 8 00:23:50.448786 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:23:50.453155 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 8 00:23:50.468406 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 8 00:23:50.472361 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 8 00:23:50.475796 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Nov 8 00:23:50.614417 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 8 00:23:50.628379 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 8 00:23:50.643995 systemd-tmpfiles[1338]: ACLs are not supported, ignoring. Nov 8 00:23:50.644021 systemd-tmpfiles[1338]: ACLs are not supported, ignoring. Nov 8 00:23:50.648148 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Nov 8 00:23:50.718724 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 8 00:23:50.782728 kernel: loop1: detected capacity change from 0 to 140768 Nov 8 00:23:51.244802 kernel: loop2: detected capacity change from 0 to 142488 Nov 8 00:23:51.257523 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 8 00:23:51.269862 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:23:51.293629 systemd-udevd[1345]: Using default interface naming scheme 'v255'. Nov 8 00:23:51.443476 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:23:51.455887 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 8 00:23:51.520311 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 8 00:23:51.540918 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 8 00:23:51.616393 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 8 00:23:51.643762 kernel: loop3: detected capacity change from 0 to 229808 Nov 8 00:23:51.659736 kernel: mousedev: PS/2 mouse device common for all mice Nov 8 00:23:51.692718 kernel: hv_vmbus: registering driver hyperv_fb Nov 8 00:23:51.692798 kernel: loop4: detected capacity change from 0 to 31056 Nov 8 00:23:51.702624 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Nov 8 00:23:51.702787 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Nov 8 00:23:51.707249 kernel: Console: switching to colour dummy device 80x25 Nov 8 00:23:51.709718 kernel: Console: switching to colour frame buffer device 128x48 Nov 8 00:23:51.720796 kernel: hv_vmbus: registering driver hv_balloon Nov 8 00:23:51.729827 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Nov 8 00:23:51.738840 kernel: loop5: detected capacity change from 0 to 140768 Nov 8 00:23:51.872413 systemd-networkd[1354]: lo: Link UP Nov 8 00:23:51.872898 systemd-networkd[1354]: lo: Gained carrier Nov 8 00:23:51.878754 kernel: loop6: detected capacity change from 0 to 142488 Nov 8 00:23:51.880598 systemd-networkd[1354]: Enumeration completed Nov 8 00:23:51.881978 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 8 00:23:51.888422 systemd-networkd[1354]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:23:51.888779 systemd-networkd[1354]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 8 00:23:51.894983 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 8 00:23:51.930718 kernel: loop7: detected capacity change from 0 to 229808 Nov 8 00:23:51.943448 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:23:51.960722 kernel: mlx5_core 6661:00:02.0 enP26209s1: Link up Nov 8 00:23:51.961002 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1362) Nov 8 00:23:51.961310 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:23:51.961484 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:23:51.973969 (sd-merge)[1388]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. 
Nov 8 00:23:51.991982 kernel: hv_netvsc 000d3ab3-9c71-000d-3ab3-9c71000d3ab3 eth0: Data path switched to VF: enP26209s1 Nov 8 00:23:51.974641 (sd-merge)[1388]: Merged extensions into '/usr'. Nov 8 00:23:51.976899 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:23:51.999287 systemd-networkd[1354]: enP26209s1: Link UP Nov 8 00:23:51.999452 systemd-networkd[1354]: eth0: Link UP Nov 8 00:23:51.999457 systemd-networkd[1354]: eth0: Gained carrier Nov 8 00:23:51.999483 systemd-networkd[1354]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:23:52.013520 systemd-networkd[1354]: enP26209s1: Gained carrier Nov 8 00:23:52.035473 systemd[1]: Reloading requested from client PID 1318 ('systemd-sysext') (unit systemd-sysext.service)... Nov 8 00:23:52.035762 systemd[1]: Reloading... Nov 8 00:23:52.042820 systemd-networkd[1354]: eth0: DHCPv4 address 10.200.8.41/24, gateway 10.200.8.1 acquired from 168.63.129.16 Nov 8 00:23:52.246767 zram_generator::config[1465]: No configuration found. Nov 8 00:23:52.321742 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Nov 8 00:23:52.477871 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:23:52.562327 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Nov 8 00:23:52.566481 systemd[1]: Reloading finished in 530 ms. Nov 8 00:23:52.597071 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 8 00:23:52.600811 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:23:52.627985 systemd[1]: Starting ensure-sysext.service... Nov 8 00:23:52.632895 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 8 00:23:52.643182 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 8 00:23:52.646278 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:23:52.646365 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:23:52.649576 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:23:52.654883 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:23:52.658575 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 8 00:23:52.671830 systemd[1]: Reloading requested from client PID 1525 ('systemctl') (unit ensure-sysext.service)... Nov 8 00:23:52.671851 systemd[1]: Reloading... Nov 8 00:23:52.707001 systemd-tmpfiles[1527]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 8 00:23:52.708124 systemd-tmpfiles[1527]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 8 00:23:52.709680 systemd-tmpfiles[1527]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 8 00:23:52.710210 systemd-tmpfiles[1527]: ACLs are not supported, ignoring. Nov 8 00:23:52.710362 systemd-tmpfiles[1527]: ACLs are not supported, ignoring. Nov 8 00:23:52.732180 systemd-tmpfiles[1527]: Detected autofs mount point /boot during canonicalization of boot. 
Nov 8 00:23:52.732200 systemd-tmpfiles[1527]: Skipping /boot Nov 8 00:23:52.752774 zram_generator::config[1561]: No configuration found. Nov 8 00:23:52.752033 systemd-tmpfiles[1527]: Detected autofs mount point /boot during canonicalization of boot. Nov 8 00:23:52.752043 systemd-tmpfiles[1527]: Skipping /boot Nov 8 00:23:52.905891 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:23:52.987078 systemd[1]: Reloading finished in 314 ms. Nov 8 00:23:53.019378 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 8 00:23:53.023349 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:23:53.027114 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:23:53.043039 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 8 00:23:53.050116 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 8 00:23:53.056815 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 8 00:23:53.074017 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 8 00:23:53.084983 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 8 00:23:53.092020 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 8 00:23:53.101126 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:23:53.101395 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:23:53.117108 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:23:53.126016 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:23:53.136571 lvm[1631]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 8 00:23:53.139061 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:23:53.144067 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:23:53.144807 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:23:53.147844 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:23:53.148060 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:23:53.155238 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:23:53.155411 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:23:53.172925 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:23:53.173588 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:23:53.178998 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 8 00:23:53.188436 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:23:53.194136 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Nov 8 00:23:53.194494 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:23:53.204931 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 8 00:23:53.211492 lvm[1655]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 8 00:23:53.220766 augenrules[1657]: No rules Nov 8 00:23:53.220133 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:23:53.226343 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:23:53.239160 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:23:53.241881 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:23:53.242182 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:23:53.244171 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 8 00:23:53.247575 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:23:53.247771 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:23:53.252509 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:23:53.252675 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:23:53.258559 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:23:53.258960 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:23:53.263955 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 8 00:23:53.270102 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 8 00:23:53.283929 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:23:53.284210 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:23:53.290061 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:23:53.294409 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 8 00:23:53.301781 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:23:53.310985 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:23:53.313512 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:23:53.313806 systemd[1]: Reached target time-set.target - System Time Set. Nov 8 00:23:53.316818 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:23:53.318754 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 8 00:23:53.321371 systemd-resolved[1638]: Positive Trust Anchors: Nov 8 00:23:53.321389 systemd-resolved[1638]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 8 00:23:53.321426 systemd-resolved[1638]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 8 00:23:53.323184 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:23:53.323389 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:23:53.327229 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 8 00:23:53.327396 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 8 00:23:53.330646 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:23:53.330942 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:23:53.334256 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:23:53.334413 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:23:53.339789 systemd[1]: Finished ensure-sysext.service. Nov 8 00:23:53.341840 systemd-resolved[1638]: Using system hostname 'ci-4081.3.6-n-036966ce4d'. Nov 8 00:23:53.345653 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 8 00:23:53.350337 systemd[1]: Reached target network.target - Network. Nov 8 00:23:53.352447 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:23:53.355258 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 8 00:23:53.355347 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 8 00:23:53.698288 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 8 00:23:53.702748 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 8 00:23:53.987899 systemd-networkd[1354]: eth0: Gained IPv6LL Nov 8 00:23:53.990768 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 8 00:23:53.994207 systemd[1]: Reached target network-online.target - Network is Online. Nov 8 00:23:56.196715 ldconfig[1313]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 8 00:23:56.218663 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 8 00:23:56.227930 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 8 00:23:56.263236 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 8 00:23:56.267902 systemd[1]: Reached target sysinit.target - System Initialization. Nov 8 00:23:56.272410 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 8 00:23:56.276499 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
Nov 8 00:23:56.281637 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 8 00:23:56.285374 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 8 00:23:56.289814 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 8 00:23:56.296204 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 8 00:23:56.296253 systemd[1]: Reached target paths.target - Path Units. Nov 8 00:23:56.299246 systemd[1]: Reached target timers.target - Timer Units. Nov 8 00:23:56.303919 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 8 00:23:56.307742 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 8 00:23:56.321669 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 8 00:23:56.325787 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 8 00:23:56.329715 systemd[1]: Reached target sockets.target - Socket Units. Nov 8 00:23:56.332217 systemd[1]: Reached target basic.target - Basic System. Nov 8 00:23:56.334295 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 8 00:23:56.334329 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 8 00:23:56.340833 systemd[1]: Starting chronyd.service - NTP client/server... Nov 8 00:23:56.346823 systemd[1]: Starting containerd.service - containerd container runtime... Nov 8 00:23:56.354886 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 8 00:23:56.366968 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 8 00:23:56.371824 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 8 00:23:56.384980 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 8 00:23:56.387581 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 8 00:23:56.387638 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Nov 8 00:23:56.390641 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Nov 8 00:23:56.395998 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Nov 8 00:23:56.402836 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:23:56.409635 (chronyd)[1688]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Nov 8 00:23:56.411004 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 8 00:23:56.414487 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 8 00:23:56.423864 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 8 00:23:56.428259 jq[1692]: false Nov 8 00:23:56.429498 KVP[1696]: KVP starting; pid is:1696 Nov 8 00:23:56.434451 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 8 00:23:56.439937 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Nov 8 00:23:56.447669 chronyd[1707]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Nov 8 00:23:56.454449 kernel: hv_utils: KVP IC version 4.0 Nov 8 00:23:56.450960 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 8 00:23:56.450590 KVP[1696]: KVP LIC Version: 3.1 Nov 8 00:23:56.453569 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 8 00:23:56.454167 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 8 00:23:56.456770 systemd[1]: Starting update-engine.service - Update Engine... Nov 8 00:23:56.479786 chronyd[1707]: Timezone right/UTC failed leap second check, ignoring Nov 8 00:23:56.480021 chronyd[1707]: Loaded seccomp filter (level 2) Nov 8 00:23:56.505708 jq[1711]: true Nov 8 00:23:56.505983 extend-filesystems[1693]: Found loop4 Nov 8 00:23:56.505983 extend-filesystems[1693]: Found loop5 Nov 8 00:23:56.505983 extend-filesystems[1693]: Found loop6 Nov 8 00:23:56.505983 extend-filesystems[1693]: Found loop7 Nov 8 00:23:56.505983 extend-filesystems[1693]: Found sda Nov 8 00:23:56.505983 extend-filesystems[1693]: Found sda1 Nov 8 00:23:56.505983 extend-filesystems[1693]: Found sda2 Nov 8 00:23:56.505983 extend-filesystems[1693]: Found sda3 Nov 8 00:23:56.505983 extend-filesystems[1693]: Found usr Nov 8 00:23:56.505983 extend-filesystems[1693]: Found sda4 Nov 8 00:23:56.505983 extend-filesystems[1693]: Found sda6 Nov 8 00:23:56.505983 extend-filesystems[1693]: Found sda7 Nov 8 00:23:56.505983 extend-filesystems[1693]: Found sda9 Nov 8 00:23:56.505983 extend-filesystems[1693]: Checking size of /dev/sda9 Nov 8 00:23:56.487147 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 8 00:23:56.673739 extend-filesystems[1693]: Old size kept for /dev/sda9 Nov 8 00:23:56.673739 extend-filesystems[1693]: Found sr0 Nov 8 00:23:56.543123 dbus-daemon[1691]: [system] SELinux support is enabled Nov 8 00:23:56.492452 systemd[1]: Started chronyd.service - NTP client/server. Nov 8 00:23:56.689672 update_engine[1709]: I20251108 00:23:56.596355 1709 main.cc:92] Flatcar Update Engine starting Nov 8 00:23:56.689672 update_engine[1709]: I20251108 00:23:56.625980 1709 update_check_scheduler.cc:74] Next update check in 7m17s Nov 8 00:23:56.508537 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 8 00:23:56.508817 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 8 00:23:56.511156 systemd[1]: motdgen.service: Deactivated successfully. Nov 8 00:23:56.691846 jq[1728]: true Nov 8 00:23:56.511372 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 8 00:23:56.532124 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 8 00:23:56.532333 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 8 00:23:56.550550 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 8 00:23:56.692454 tar[1724]: linux-amd64/LICENSE Nov 8 00:23:56.692454 tar[1724]: linux-amd64/helm Nov 8 00:23:56.582187 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Nov 8 00:23:56.582231 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 8 00:23:56.589381 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 8 00:23:56.589407 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 8 00:23:56.591130 (ntainerd)[1736]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 8 00:23:56.596124 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 8 00:23:56.596774 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 8 00:23:56.607491 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 8 00:23:56.621783 systemd[1]: Started update-engine.service - Update Engine. Nov 8 00:23:56.637126 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 8 00:23:56.713266 coreos-metadata[1690]: Nov 08 00:23:56.713 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Nov 8 00:23:56.721065 coreos-metadata[1690]: Nov 08 00:23:56.717 INFO Fetch successful Nov 8 00:23:56.721065 coreos-metadata[1690]: Nov 08 00:23:56.718 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Nov 8 00:23:56.723795 coreos-metadata[1690]: Nov 08 00:23:56.723 INFO Fetch successful Nov 8 00:23:56.724470 coreos-metadata[1690]: Nov 08 00:23:56.724 INFO Fetching http://168.63.129.16/machine/edb23f14-b1b7-4dbb-872f-426e884907cd/a910e947%2Dde7d%2D46fc%2Dbf2a%2Db4a9547bc048.%5Fci%2D4081.3.6%2Dn%2D036966ce4d?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Nov 8 00:23:56.726656 coreos-metadata[1690]: Nov 08 00:23:56.726 INFO Fetch successful Nov 8 00:23:56.727102 coreos-metadata[1690]: Nov 08 00:23:56.726 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Nov 8 00:23:56.742897 coreos-metadata[1690]: Nov 08 00:23:56.740 INFO Fetch successful Nov 8 00:23:56.744573 systemd-logind[1708]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Nov 8 00:23:56.746891 systemd-logind[1708]: New seat seat0. Nov 8 00:23:56.752111 systemd[1]: Started systemd-logind.service - User Login Management. Nov 8 00:23:56.797228 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 8 00:23:56.800428 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 8 00:23:56.815199 bash[1762]: Updated "/home/core/.ssh/authorized_keys" Nov 8 00:23:56.818211 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 8 00:23:56.826568 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 8 00:23:56.905169 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1774) Nov 8 00:23:57.043774 locksmithd[1748]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 8 00:23:57.084300 sshd_keygen[1721]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 8 00:23:57.128205 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 8 00:23:57.142833 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 8 00:23:57.150993 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... 
Nov 8 00:23:57.167529 systemd[1]: issuegen.service: Deactivated successfully. Nov 8 00:23:57.167814 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 8 00:23:57.180986 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 8 00:23:57.238449 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Nov 8 00:23:57.254262 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 8 00:23:57.266859 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 8 00:23:57.278117 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 8 00:23:57.284281 systemd[1]: Reached target getty.target - Login Prompts. Nov 8 00:23:57.720724 tar[1724]: linux-amd64/README.md Nov 8 00:23:57.735620 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 8 00:23:57.755989 containerd[1736]: time="2025-11-08T00:23:57.755024400Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 8 00:23:57.789722 containerd[1736]: time="2025-11-08T00:23:57.789664800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:23:57.791559 containerd[1736]: time="2025-11-08T00:23:57.791514400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:23:57.791559 containerd[1736]: time="2025-11-08T00:23:57.791550500Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 8 00:23:57.791739 containerd[1736]: time="2025-11-08T00:23:57.791572900Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 8 00:23:57.793041 containerd[1736]: time="2025-11-08T00:23:57.792879900Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 8 00:23:57.793041 containerd[1736]: time="2025-11-08T00:23:57.792932300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 8 00:23:57.793041 containerd[1736]: time="2025-11-08T00:23:57.793024400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:23:57.793041 containerd[1736]: time="2025-11-08T00:23:57.793041800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:23:57.793312 containerd[1736]: time="2025-11-08T00:23:57.793282000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:23:57.793312 containerd[1736]: time="2025-11-08T00:23:57.793306400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 8 00:23:57.793406 containerd[1736]: time="2025-11-08T00:23:57.793325800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:23:57.793406 containerd[1736]: time="2025-11-08T00:23:57.793340500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 8 00:23:57.793483 containerd[1736]: time="2025-11-08T00:23:57.793446400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:23:57.793724 containerd[1736]: time="2025-11-08T00:23:57.793682900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 8 00:23:57.793864 containerd[1736]: time="2025-11-08T00:23:57.793839600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 8 00:23:57.793864 containerd[1736]: time="2025-11-08T00:23:57.793860300Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 8 00:23:57.793987 containerd[1736]: time="2025-11-08T00:23:57.793965600Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 8 00:23:57.794044 containerd[1736]: time="2025-11-08T00:23:57.794025200Z" level=info msg="metadata content store policy set" policy=shared Nov 8 00:23:57.818365 containerd[1736]: time="2025-11-08T00:23:57.816982500Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 8 00:23:57.818365 containerd[1736]: time="2025-11-08T00:23:57.817067700Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 8 00:23:57.818365 containerd[1736]: time="2025-11-08T00:23:57.817091500Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 8 00:23:57.818365 containerd[1736]: time="2025-11-08T00:23:57.817113900Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 8 00:23:57.818365 containerd[1736]: time="2025-11-08T00:23:57.817134900Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 8 00:23:57.818365 containerd[1736]: time="2025-11-08T00:23:57.817346400Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 8 00:23:57.818365 containerd[1736]: time="2025-11-08T00:23:57.817668600Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 8 00:23:57.818365 containerd[1736]: time="2025-11-08T00:23:57.817825500Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 8 00:23:57.818365 containerd[1736]: time="2025-11-08T00:23:57.817846000Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 8 00:23:57.818365 containerd[1736]: time="2025-11-08T00:23:57.817865300Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 8 00:23:57.818365 containerd[1736]: time="2025-11-08T00:23:57.817887200Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Nov 8 00:23:57.818365 containerd[1736]: time="2025-11-08T00:23:57.817906300Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 8 00:23:57.818365 containerd[1736]: time="2025-11-08T00:23:57.817924100Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 8 00:23:57.818365 containerd[1736]: time="2025-11-08T00:23:57.817944100Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 8 00:23:57.818968 containerd[1736]: time="2025-11-08T00:23:57.817964300Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 8 00:23:57.818968 containerd[1736]: time="2025-11-08T00:23:57.817983100Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 8 00:23:57.818968 containerd[1736]: time="2025-11-08T00:23:57.818007500Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 8 00:23:57.818968 containerd[1736]: time="2025-11-08T00:23:57.818027000Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 8 00:23:57.818968 containerd[1736]: time="2025-11-08T00:23:57.818053200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 8 00:23:57.818968 containerd[1736]: time="2025-11-08T00:23:57.818073300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 8 00:23:57.818968 containerd[1736]: time="2025-11-08T00:23:57.818090400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 8 00:23:57.818968 containerd[1736]: time="2025-11-08T00:23:57.818123300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 8 00:23:57.818968 containerd[1736]: time="2025-11-08T00:23:57.818148100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 8 00:23:57.818968 containerd[1736]: time="2025-11-08T00:23:57.818170400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 8 00:23:57.818968 containerd[1736]: time="2025-11-08T00:23:57.818187300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 8 00:23:57.818968 containerd[1736]: time="2025-11-08T00:23:57.818204900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 8 00:23:57.818968 containerd[1736]: time="2025-11-08T00:23:57.818223000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 8 00:23:57.818968 containerd[1736]: time="2025-11-08T00:23:57.818269300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 8 00:23:57.819493 containerd[1736]: time="2025-11-08T00:23:57.818289200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 8 00:23:57.819493 containerd[1736]: time="2025-11-08T00:23:57.818305200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Nov 8 00:23:57.819493 containerd[1736]: time="2025-11-08T00:23:57.818322900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 8 00:23:57.819493 containerd[1736]: time="2025-11-08T00:23:57.818391500Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 8 00:23:57.819493 containerd[1736]: time="2025-11-08T00:23:57.818434500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 8 00:23:57.819493 containerd[1736]: time="2025-11-08T00:23:57.818455100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 8 00:23:57.819493 containerd[1736]: time="2025-11-08T00:23:57.818471900Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 8 00:23:57.819493 containerd[1736]: time="2025-11-08T00:23:57.818525300Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 8 00:23:57.819493 containerd[1736]: time="2025-11-08T00:23:57.818549300Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 8 00:23:57.819493 containerd[1736]: time="2025-11-08T00:23:57.818567800Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 8 00:23:57.819493 containerd[1736]: time="2025-11-08T00:23:57.818585700Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 8 00:23:57.819493 containerd[1736]: time="2025-11-08T00:23:57.818600400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 8 00:23:57.819493 containerd[1736]: time="2025-11-08T00:23:57.818617900Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 8 00:23:57.819493 containerd[1736]: time="2025-11-08T00:23:57.818637600Z" level=info msg="NRI interface is disabled by configuration." Nov 8 00:23:57.819986 containerd[1736]: time="2025-11-08T00:23:57.818652100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Nov 8 00:23:57.820033 containerd[1736]: time="2025-11-08T00:23:57.819035800Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 8 00:23:57.820033 containerd[1736]: time="2025-11-08T00:23:57.819116900Z" level=info msg="Connect containerd service" Nov 8 00:23:57.820033 containerd[1736]: time="2025-11-08T00:23:57.819175100Z" level=info msg="using legacy CRI server" Nov 8 00:23:57.820033 containerd[1736]: time="2025-11-08T00:23:57.819185700Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 8 00:23:57.820033 containerd[1736]: time="2025-11-08T00:23:57.819328500Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 8 00:23:57.823235 containerd[1736]: time="2025-11-08T00:23:57.822553000Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 8 00:23:57.823235 
containerd[1736]: time="2025-11-08T00:23:57.822952000Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 8 00:23:57.823235 containerd[1736]: time="2025-11-08T00:23:57.823014300Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 8 00:23:57.823235 containerd[1736]: time="2025-11-08T00:23:57.823075100Z" level=info msg="Start subscribing containerd event" Nov 8 00:23:57.823235 containerd[1736]: time="2025-11-08T00:23:57.823125500Z" level=info msg="Start recovering state" Nov 8 00:23:57.823235 containerd[1736]: time="2025-11-08T00:23:57.823204000Z" level=info msg="Start event monitor" Nov 8 00:23:57.823235 containerd[1736]: time="2025-11-08T00:23:57.823222600Z" level=info msg="Start snapshots syncer" Nov 8 00:23:57.823235 containerd[1736]: time="2025-11-08T00:23:57.823237500Z" level=info msg="Start cni network conf syncer for default" Nov 8 00:23:57.823543 containerd[1736]: time="2025-11-08T00:23:57.823246500Z" level=info msg="Start streaming server" Nov 8 00:23:57.823436 systemd[1]: Started containerd.service - containerd container runtime. Nov 8 00:23:57.828559 containerd[1736]: time="2025-11-08T00:23:57.827575700Z" level=info msg="containerd successfully booted in 0.074084s" Nov 8 00:23:58.356743 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:23:58.362484 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 8 00:23:58.367818 systemd[1]: Startup finished in 748ms (firmware) + 14.839s (loader) + 1.222s (kernel) + 11.647s (initrd) + 11.618s (userspace) = 40.075s. Nov 8 00:23:58.374292 (kubelet)[1851]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:23:58.747922 login[1833]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 8 00:23:58.749224 login[1834]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 8 00:23:58.771576 systemd-logind[1708]: New session 2 of user core. Nov 8 00:23:58.775979 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 8 00:23:58.786007 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 8 00:23:58.793815 systemd-logind[1708]: New session 1 of user core. Nov 8 00:23:58.815886 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 8 00:23:58.825141 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 8 00:23:58.832543 (systemd)[1862]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 8 00:23:59.056896 systemd[1862]: Queued start job for default target default.target. Nov 8 00:23:59.060289 systemd[1862]: Created slice app.slice - User Application Slice. Nov 8 00:23:59.060321 systemd[1862]: Reached target paths.target - Paths. Nov 8 00:23:59.060340 systemd[1862]: Reached target timers.target - Timers. Nov 8 00:23:59.063470 systemd[1862]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 8 00:23:59.087744 systemd[1862]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 8 00:23:59.087891 systemd[1862]: Reached target sockets.target - Sockets. Nov 8 00:23:59.087910 systemd[1862]: Reached target basic.target - Basic System. Nov 8 00:23:59.087957 systemd[1862]: Reached target default.target - Main User Target. Nov 8 00:23:59.087997 systemd[1862]: Startup finished in 242ms. 
Nov 8 00:23:59.088206 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 8 00:23:59.094884 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 8 00:23:59.095833 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 8 00:23:59.312162 waagent[1831]: 2025-11-08T00:23:59.311986Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Nov 8 00:23:59.344975 waagent[1831]: 2025-11-08T00:23:59.313321Z INFO Daemon Daemon OS: flatcar 4081.3.6 Nov 8 00:23:59.344975 waagent[1831]: 2025-11-08T00:23:59.314151Z INFO Daemon Daemon Python: 3.11.9 Nov 8 00:23:59.344975 waagent[1831]: 2025-11-08T00:23:59.315137Z INFO Daemon Daemon Run daemon Nov 8 00:23:59.344975 waagent[1831]: 2025-11-08T00:23:59.316329Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.6' Nov 8 00:23:59.344975 waagent[1831]: 2025-11-08T00:23:59.317284Z INFO Daemon Daemon Using waagent for provisioning Nov 8 00:23:59.344975 waagent[1831]: 2025-11-08T00:23:59.318244Z INFO Daemon Daemon Activate resource disk Nov 8 00:23:59.344975 waagent[1831]: 2025-11-08T00:23:59.318919Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Nov 8 00:23:59.344975 waagent[1831]: 2025-11-08T00:23:59.323505Z INFO Daemon Daemon Found device: None Nov 8 00:23:59.344975 waagent[1831]: 2025-11-08T00:23:59.324225Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Nov 8 00:23:59.344975 waagent[1831]: 2025-11-08T00:23:59.325062Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Nov 8 00:23:59.344975 waagent[1831]: 2025-11-08T00:23:59.327642Z INFO Daemon Daemon Clean protocol and wireserver endpoint Nov 8 00:23:59.344975 waagent[1831]: 2025-11-08T00:23:59.328526Z INFO Daemon Daemon Running default provisioning handler Nov 8 00:23:59.349988 waagent[1831]: 2025-11-08T00:23:59.349886Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Nov 8 00:23:59.356936 waagent[1831]: 2025-11-08T00:23:59.356205Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Nov 8 00:23:59.360687 waagent[1831]: 2025-11-08T00:23:59.360602Z INFO Daemon Daemon cloud-init is enabled: False Nov 8 00:23:59.363891 waagent[1831]: 2025-11-08T00:23:59.363332Z INFO Daemon Daemon Copying ovf-env.xml Nov 8 00:23:59.386130 kubelet[1851]: E1108 00:23:59.386078 1851 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:23:59.388967 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:23:59.389169 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:23:59.389654 systemd[1]: kubelet.service: Consumed 1.075s CPU time. Nov 8 00:23:59.468871 waagent[1831]: 2025-11-08T00:23:59.467924Z INFO Daemon Daemon Successfully mounted dvd Nov 8 00:23:59.483794 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. 
Nov 8 00:23:59.486149 waagent[1831]: 2025-11-08T00:23:59.486078Z INFO Daemon Daemon Detect protocol endpoint Nov 8 00:23:59.500099 waagent[1831]: 2025-11-08T00:23:59.487295Z INFO Daemon Daemon Clean protocol and wireserver endpoint Nov 8 00:23:59.500099 waagent[1831]: 2025-11-08T00:23:59.488224Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Nov 8 00:23:59.500099 waagent[1831]: 2025-11-08T00:23:59.489161Z INFO Daemon Daemon Test for route to 168.63.129.16 Nov 8 00:23:59.500099 waagent[1831]: 2025-11-08T00:23:59.490241Z INFO Daemon Daemon Route to 168.63.129.16 exists Nov 8 00:23:59.500099 waagent[1831]: 2025-11-08T00:23:59.491033Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Nov 8 00:23:59.514481 waagent[1831]: 2025-11-08T00:23:59.514420Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Nov 8 00:23:59.521675 waagent[1831]: 2025-11-08T00:23:59.515845Z INFO Daemon Daemon Wire protocol version:2012-11-30 Nov 8 00:23:59.521675 waagent[1831]: 2025-11-08T00:23:59.516325Z INFO Daemon Daemon Server preferred version:2015-04-05 Nov 8 00:23:59.651506 waagent[1831]: 2025-11-08T00:23:59.651344Z INFO Daemon Daemon Initializing goal state during protocol detection Nov 8 00:23:59.655001 waagent[1831]: 2025-11-08T00:23:59.654927Z INFO Daemon Daemon Forcing an update of the goal state. Nov 8 00:23:59.661206 waagent[1831]: 2025-11-08T00:23:59.661148Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Nov 8 00:23:59.681955 waagent[1831]: 2025-11-08T00:23:59.681891Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Nov 8 00:23:59.687575 waagent[1831]: 2025-11-08T00:23:59.685052Z INFO Daemon Nov 8 00:23:59.687656 waagent[1831]: 2025-11-08T00:23:59.687587Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: f7f4edd1-e6ee-4f43-99ae-b455976a88dc eTag: 8587867016854983001 source: Fabric] Nov 8 00:23:59.700971 waagent[1831]: 2025-11-08T00:23:59.691439Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Nov 8 00:23:59.700971 waagent[1831]: 2025-11-08T00:23:59.697398Z INFO Daemon Nov 8 00:23:59.700971 waagent[1831]: 2025-11-08T00:23:59.700906Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Nov 8 00:23:59.707289 waagent[1831]: 2025-11-08T00:23:59.707241Z INFO Daemon Daemon Downloading artifacts profile blob Nov 8 00:23:59.830002 waagent[1831]: 2025-11-08T00:23:59.829913Z INFO Daemon Downloaded certificate {'thumbprint': '3F1D174BD29838D322098F4AF5D0DB7FA98D1C77', 'hasPrivateKey': True} Nov 8 00:23:59.835826 waagent[1831]: 2025-11-08T00:23:59.835764Z INFO Daemon Fetch goal state completed Nov 8 00:23:59.846382 waagent[1831]: 2025-11-08T00:23:59.846312Z INFO Daemon Daemon Starting provisioning Nov 8 00:23:59.849025 waagent[1831]: 2025-11-08T00:23:59.848969Z INFO Daemon Daemon Handle ovf-env.xml. Nov 8 00:23:59.853727 waagent[1831]: 2025-11-08T00:23:59.850006Z INFO Daemon Daemon Set hostname [ci-4081.3.6-n-036966ce4d] Nov 8 00:23:59.869599 waagent[1831]: 2025-11-08T00:23:59.869522Z INFO Daemon Daemon Publish hostname [ci-4081.3.6-n-036966ce4d] Nov 8 00:23:59.876413 waagent[1831]: 2025-11-08T00:23:59.870733Z INFO Daemon Daemon Examine /proc/net/route for primary interface Nov 8 00:23:59.876413 waagent[1831]: 2025-11-08T00:23:59.871454Z INFO Daemon Daemon Primary interface is [eth0] Nov 8 00:23:59.894471 systemd-networkd[1354]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Nov 8 00:23:59.894481 systemd-networkd[1354]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 8 00:23:59.894530 systemd-networkd[1354]: eth0: DHCP lease lost Nov 8 00:23:59.895935 waagent[1831]: 2025-11-08T00:23:59.895840Z INFO Daemon Daemon Create user account if not exists Nov 8 00:23:59.911170 waagent[1831]: 2025-11-08T00:23:59.897325Z INFO Daemon Daemon User core already exists, skip useradd Nov 8 00:23:59.911170 waagent[1831]: 2025-11-08T00:23:59.897991Z INFO Daemon Daemon Configure sudoer Nov 8 00:23:59.911170 waagent[1831]: 2025-11-08T00:23:59.898978Z INFO Daemon Daemon Configure sshd Nov 8 00:23:59.911170 waagent[1831]: 2025-11-08T00:23:59.899935Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Nov 8 00:23:59.911170 waagent[1831]: 2025-11-08T00:23:59.901077Z INFO Daemon Daemon Deploy ssh public key. Nov 8 00:23:59.913791 systemd-networkd[1354]: eth0: DHCPv6 lease lost Nov 8 00:23:59.950780 systemd-networkd[1354]: eth0: DHCPv4 address 10.200.8.41/24, gateway 10.200.8.1 acquired from 168.63.129.16 Nov 8 00:24:01.025979 waagent[1831]: 2025-11-08T00:24:01.025916Z INFO Daemon Daemon Provisioning complete Nov 8 00:24:01.040056 waagent[1831]: 2025-11-08T00:24:01.039991Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Nov 8 00:24:01.047637 waagent[1831]: 2025-11-08T00:24:01.041990Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Nov 8 00:24:01.047637 waagent[1831]: 2025-11-08T00:24:01.043318Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Nov 8 00:24:01.170654 waagent[1915]: 2025-11-08T00:24:01.170559Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Nov 8 00:24:01.171063 waagent[1915]: 2025-11-08T00:24:01.170738Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.6 Nov 8 00:24:01.171063 waagent[1915]: 2025-11-08T00:24:01.170835Z INFO ExtHandler ExtHandler Python: 3.11.9 Nov 8 00:24:01.219401 waagent[1915]: 2025-11-08T00:24:01.219291Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.6; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Nov 8 00:24:01.219662 waagent[1915]: 2025-11-08T00:24:01.219606Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 8 00:24:01.219778 waagent[1915]: 2025-11-08T00:24:01.219736Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 8 00:24:01.227753 waagent[1915]: 2025-11-08T00:24:01.227665Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Nov 8 00:24:01.233236 waagent[1915]: 2025-11-08T00:24:01.233175Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Nov 8 00:24:01.233713 waagent[1915]: 2025-11-08T00:24:01.233652Z INFO ExtHandler Nov 8 00:24:01.233804 waagent[1915]: 2025-11-08T00:24:01.233772Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: ca4c3ab3-c548-448c-abe2-210e9d7bde0f eTag: 8587867016854983001 source: Fabric] Nov 8 00:24:01.234147 waagent[1915]: 2025-11-08T00:24:01.234093Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Nov 8 00:24:01.234712 waagent[1915]: 2025-11-08T00:24:01.234653Z INFO ExtHandler Nov 8 00:24:01.234805 waagent[1915]: 2025-11-08T00:24:01.234759Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Nov 8 00:24:01.238306 waagent[1915]: 2025-11-08T00:24:01.238262Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Nov 8 00:24:01.311033 waagent[1915]: 2025-11-08T00:24:01.310879Z INFO ExtHandler Downloaded certificate {'thumbprint': '3F1D174BD29838D322098F4AF5D0DB7FA98D1C77', 'hasPrivateKey': True} Nov 8 00:24:01.311531 waagent[1915]: 2025-11-08T00:24:01.311471Z INFO ExtHandler Fetch goal state completed Nov 8 00:24:01.328770 waagent[1915]: 2025-11-08T00:24:01.328674Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1915 Nov 8 00:24:01.328966 waagent[1915]: 2025-11-08T00:24:01.328910Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Nov 8 00:24:01.330574 waagent[1915]: 2025-11-08T00:24:01.330506Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.6', '', 'Flatcar Container Linux by Kinvolk'] Nov 8 00:24:01.330947 waagent[1915]: 2025-11-08T00:24:01.330896Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Nov 8 00:24:01.398902 waagent[1915]: 2025-11-08T00:24:01.398852Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Nov 8 00:24:01.399147 waagent[1915]: 2025-11-08T00:24:01.399094Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Nov 8 00:24:01.405899 waagent[1915]: 2025-11-08T00:24:01.405854Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Nov 8 00:24:01.413077 systemd[1]: Reloading requested from client PID 1928 ('systemctl') (unit waagent.service)... Nov 8 00:24:01.413095 systemd[1]: Reloading... Nov 8 00:24:01.497746 zram_generator::config[1962]: No configuration found. Nov 8 00:24:01.627751 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:24:01.710121 systemd[1]: Reloading finished in 296 ms. Nov 8 00:24:01.738347 waagent[1915]: 2025-11-08T00:24:01.738110Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Nov 8 00:24:01.746215 systemd[1]: Reloading requested from client PID 2019 ('systemctl') (unit waagent.service)... Nov 8 00:24:01.746319 systemd[1]: Reloading... Nov 8 00:24:01.811333 zram_generator::config[2049]: No configuration found. Nov 8 00:24:01.950765 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:24:02.033186 systemd[1]: Reloading finished in 286 ms. Nov 8 00:24:02.065835 waagent[1915]: 2025-11-08T00:24:02.062668Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Nov 8 00:24:02.065835 waagent[1915]: 2025-11-08T00:24:02.062918Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Nov 8 00:24:03.164623 waagent[1915]: 2025-11-08T00:24:03.164508Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. 
Nov 8 00:24:03.165451 waagent[1915]: 2025-11-08T00:24:03.165374Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Nov 8 00:24:03.166414 waagent[1915]: 2025-11-08T00:24:03.166338Z INFO ExtHandler ExtHandler Starting env monitor service. Nov 8 00:24:03.166913 waagent[1915]: 2025-11-08T00:24:03.166846Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Nov 8 00:24:03.167138 waagent[1915]: 2025-11-08T00:24:03.167069Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 8 00:24:03.167265 waagent[1915]: 2025-11-08T00:24:03.167219Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 8 00:24:03.167541 waagent[1915]: 2025-11-08T00:24:03.167481Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Nov 8 00:24:03.167872 waagent[1915]: 2025-11-08T00:24:03.167816Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Nov 8 00:24:03.167872 waagent[1915]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Nov 8 00:24:03.167872 waagent[1915]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Nov 8 00:24:03.167872 waagent[1915]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Nov 8 00:24:03.167872 waagent[1915]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Nov 8 00:24:03.167872 waagent[1915]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Nov 8 00:24:03.167872 waagent[1915]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Nov 8 00:24:03.168202 waagent[1915]: 2025-11-08T00:24:03.168100Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 8 00:24:03.168261 waagent[1915]: 2025-11-08T00:24:03.168214Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 8 00:24:03.168814 waagent[1915]: 2025-11-08T00:24:03.168600Z INFO EnvHandler ExtHandler Configure routes Nov 8 00:24:03.168814 waagent[1915]: 2025-11-08T00:24:03.168744Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Nov 8 00:24:03.169011 waagent[1915]: 2025-11-08T00:24:03.168875Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Nov 8 00:24:03.169072 waagent[1915]: 2025-11-08T00:24:03.169020Z INFO EnvHandler ExtHandler Gateway:None Nov 8 00:24:03.169214 waagent[1915]: 2025-11-08T00:24:03.169122Z INFO EnvHandler ExtHandler Routes:None Nov 8 00:24:03.170573 waagent[1915]: 2025-11-08T00:24:03.170499Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Nov 8 00:24:03.170902 waagent[1915]: 2025-11-08T00:24:03.170842Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Nov 8 00:24:03.171459 waagent[1915]: 2025-11-08T00:24:03.171391Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Nov 8 00:24:03.177193 waagent[1915]: 2025-11-08T00:24:03.177151Z INFO ExtHandler ExtHandler Nov 8 00:24:03.177291 waagent[1915]: 2025-11-08T00:24:03.177250Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: cf1344af-3c53-4757-adef-09f00b33eda2 correlation f401923a-1e0c-4375-9dfd-5324b80e8462 created: 2025-11-08T00:23:06.306328Z] Nov 8 00:24:03.178134 waagent[1915]: 2025-11-08T00:24:03.178091Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Nov 8 00:24:03.179324 waagent[1915]: 2025-11-08T00:24:03.179282Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] Nov 8 00:24:03.213377 waagent[1915]: 2025-11-08T00:24:03.213269Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 85126D22-E3DD-4033-AF24-DDA5B27D313C;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Nov 8 00:24:03.263955 waagent[1915]: 2025-11-08T00:24:03.263867Z INFO MonitorHandler ExtHandler Network interfaces: Nov 8 00:24:03.263955 waagent[1915]: Executing ['ip', '-a', '-o', 'link']: Nov 8 00:24:03.263955 waagent[1915]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Nov 8 00:24:03.263955 waagent[1915]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b3:9c:71 brd ff:ff:ff:ff:ff:ff Nov 8 00:24:03.263955 waagent[1915]: 3: enP26209s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b3:9c:71 brd ff:ff:ff:ff:ff:ff\ altname enP26209p0s2 Nov 8 00:24:03.263955 waagent[1915]: Executing ['ip', '-4', '-a', '-o', 'address']: Nov 8 00:24:03.263955 waagent[1915]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Nov 8 00:24:03.263955 waagent[1915]: 2: eth0 inet 10.200.8.41/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Nov 8 00:24:03.263955 waagent[1915]: Executing ['ip', '-6', '-a', '-o', 'address']: Nov 8 00:24:03.263955 waagent[1915]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Nov 8 00:24:03.263955 waagent[1915]: 2: eth0 inet6 fe80::20d:3aff:feb3:9c71/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Nov 8 00:24:03.290234 waagent[1915]: 2025-11-08T00:24:03.290167Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Nov 8 00:24:03.290234 waagent[1915]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Nov 8 00:24:03.290234 waagent[1915]: pkts bytes target prot opt in out source destination Nov 8 00:24:03.290234 waagent[1915]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Nov 8 00:24:03.290234 waagent[1915]: pkts bytes target prot opt in out source destination Nov 8 00:24:03.290234 waagent[1915]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Nov 8 00:24:03.290234 waagent[1915]: pkts bytes target prot opt in out source destination Nov 8 00:24:03.290234 waagent[1915]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Nov 8 00:24:03.290234 waagent[1915]: 7 569 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Nov 8 00:24:03.290234 waagent[1915]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Nov 8 00:24:03.293995 waagent[1915]: 2025-11-08T00:24:03.293897Z INFO EnvHandler ExtHandler Current Firewall rules: Nov 8 00:24:03.293995 waagent[1915]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Nov 8 00:24:03.293995 waagent[1915]: pkts bytes target prot opt in out source destination Nov 8 00:24:03.293995 waagent[1915]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Nov 8 00:24:03.293995 waagent[1915]: pkts bytes target prot opt in out source destination Nov 8 00:24:03.293995 waagent[1915]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Nov 8 00:24:03.293995 waagent[1915]: pkts bytes target prot opt in out source destination Nov 8 00:24:03.293995 waagent[1915]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Nov 8 00:24:03.293995 waagent[1915]: 11 1154 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Nov 8 00:24:03.293995 waagent[1915]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Nov 8 00:24:03.294394 waagent[1915]: 2025-11-08T00:24:03.294206Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Nov 8 00:24:09.639378 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 8 00:24:09.645956 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:24:09.772314 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:24:09.777336 (kubelet)[2149]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:24:10.476233 kubelet[2149]: E1108 00:24:10.476178 2149 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:24:10.480177 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:24:10.480381 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:24:20.279768 chronyd[1707]: Selected source PHC0 Nov 8 00:24:20.639302 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 8 00:24:20.644981 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:24:20.757598 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 8 00:24:20.771052 (kubelet)[2163]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:24:21.498000 kubelet[2163]: E1108 00:24:21.497943 2163 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:24:21.500554 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:24:21.500757 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:24:22.486594 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 8 00:24:22.493023 systemd[1]: Started sshd@0-10.200.8.41:22-10.200.16.10:39266.service - OpenSSH per-connection server daemon (10.200.16.10:39266). Nov 8 00:24:23.178293 sshd[2171]: Accepted publickey for core from 10.200.16.10 port 39266 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY Nov 8 00:24:23.179890 sshd[2171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:24:23.184097 systemd-logind[1708]: New session 3 of user core. Nov 8 00:24:23.194902 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 8 00:24:23.733020 systemd[1]: Started sshd@1-10.200.8.41:22-10.200.16.10:39278.service - OpenSSH per-connection server daemon (10.200.16.10:39278). Nov 8 00:24:24.364914 sshd[2176]: Accepted publickey for core from 10.200.16.10 port 39278 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY Nov 8 00:24:24.366809 sshd[2176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:24:24.370765 systemd-logind[1708]: New session 4 of user core. Nov 8 00:24:24.376887 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 8 00:24:24.822809 sshd[2176]: pam_unix(sshd:session): session closed for user core Nov 8 00:24:24.825849 systemd[1]: sshd@1-10.200.8.41:22-10.200.16.10:39278.service: Deactivated successfully. Nov 8 00:24:24.828136 systemd[1]: session-4.scope: Deactivated successfully. Nov 8 00:24:24.830079 systemd-logind[1708]: Session 4 logged out. Waiting for processes to exit. Nov 8 00:24:24.831171 systemd-logind[1708]: Removed session 4. Nov 8 00:24:24.932546 systemd[1]: Started sshd@2-10.200.8.41:22-10.200.16.10:39292.service - OpenSSH per-connection server daemon (10.200.16.10:39292). Nov 8 00:24:25.556060 sshd[2183]: Accepted publickey for core from 10.200.16.10 port 39292 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY Nov 8 00:24:25.557534 sshd[2183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:24:25.563327 systemd-logind[1708]: New session 5 of user core. Nov 8 00:24:25.572973 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 8 00:24:25.996514 sshd[2183]: pam_unix(sshd:session): session closed for user core Nov 8 00:24:26.000502 systemd[1]: sshd@2-10.200.8.41:22-10.200.16.10:39292.service: Deactivated successfully. Nov 8 00:24:26.002333 systemd[1]: session-5.scope: Deactivated successfully. Nov 8 00:24:26.003042 systemd-logind[1708]: Session 5 logged out. Waiting for processes to exit. Nov 8 00:24:26.004352 systemd-logind[1708]: Removed session 5. 
Nov 8 00:24:26.106637 systemd[1]: Started sshd@3-10.200.8.41:22-10.200.16.10:39302.service - OpenSSH per-connection server daemon (10.200.16.10:39302). Nov 8 00:24:26.733294 sshd[2190]: Accepted publickey for core from 10.200.16.10 port 39302 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY Nov 8 00:24:26.734996 sshd[2190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:24:26.740362 systemd-logind[1708]: New session 6 of user core. Nov 8 00:24:26.749892 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 8 00:24:27.180549 sshd[2190]: pam_unix(sshd:session): session closed for user core Nov 8 00:24:27.183762 systemd[1]: sshd@3-10.200.8.41:22-10.200.16.10:39302.service: Deactivated successfully. Nov 8 00:24:27.185691 systemd[1]: session-6.scope: Deactivated successfully. Nov 8 00:24:27.187120 systemd-logind[1708]: Session 6 logged out. Waiting for processes to exit. Nov 8 00:24:27.188300 systemd-logind[1708]: Removed session 6. Nov 8 00:24:27.303245 systemd[1]: Started sshd@4-10.200.8.41:22-10.200.16.10:39308.service - OpenSSH per-connection server daemon (10.200.16.10:39308). Nov 8 00:24:27.939632 sshd[2197]: Accepted publickey for core from 10.200.16.10 port 39308 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY Nov 8 00:24:27.941439 sshd[2197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:24:27.947182 systemd-logind[1708]: New session 7 of user core. Nov 8 00:24:27.955913 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 8 00:24:28.410299 sudo[2200]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 8 00:24:28.410804 sudo[2200]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:24:28.500272 sudo[2200]: pam_unix(sudo:session): session closed for user root Nov 8 00:24:28.601287 sshd[2197]: pam_unix(sshd:session): session closed for user core Nov 8 00:24:28.606096 systemd[1]: sshd@4-10.200.8.41:22-10.200.16.10:39308.service: Deactivated successfully. Nov 8 00:24:28.608077 systemd[1]: session-7.scope: Deactivated successfully. Nov 8 00:24:28.608789 systemd-logind[1708]: Session 7 logged out. Waiting for processes to exit. Nov 8 00:24:28.609785 systemd-logind[1708]: Removed session 7. Nov 8 00:24:28.715232 systemd[1]: Started sshd@5-10.200.8.41:22-10.200.16.10:39320.service - OpenSSH per-connection server daemon (10.200.16.10:39320). Nov 8 00:24:29.334006 sshd[2205]: Accepted publickey for core from 10.200.16.10 port 39320 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY Nov 8 00:24:29.370297 sshd[2205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:24:29.375855 systemd-logind[1708]: New session 8 of user core. Nov 8 00:24:29.381888 systemd[1]: Started session-8.scope - Session 8 of User core. 
Nov 8 00:24:29.678815 sudo[2209]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 8 00:24:29.679180 sudo[2209]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:24:29.682356 sudo[2209]: pam_unix(sudo:session): session closed for user root Nov 8 00:24:29.687327 sudo[2208]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 8 00:24:29.687666 sudo[2208]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:24:29.706105 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 8 00:24:29.707856 auditctl[2212]: No rules Nov 8 00:24:29.708223 systemd[1]: audit-rules.service: Deactivated successfully. Nov 8 00:24:29.708432 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 8 00:24:29.711029 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 8 00:24:29.742908 augenrules[2230]: No rules Nov 8 00:24:29.744275 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 8 00:24:29.745533 sudo[2208]: pam_unix(sudo:session): session closed for user root Nov 8 00:24:29.847280 sshd[2205]: pam_unix(sshd:session): session closed for user core Nov 8 00:24:29.850735 systemd[1]: sshd@5-10.200.8.41:22-10.200.16.10:39320.service: Deactivated successfully. Nov 8 00:24:29.852661 systemd[1]: session-8.scope: Deactivated successfully. Nov 8 00:24:29.854316 systemd-logind[1708]: Session 8 logged out. Waiting for processes to exit. Nov 8 00:24:29.855207 systemd-logind[1708]: Removed session 8. Nov 8 00:24:29.979011 systemd[1]: Started sshd@6-10.200.8.41:22-10.200.16.10:59766.service - OpenSSH per-connection server daemon (10.200.16.10:59766). Nov 8 00:24:30.597791 sshd[2238]: Accepted publickey for core from 10.200.16.10 port 59766 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY Nov 8 00:24:30.599436 sshd[2238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:24:30.603763 systemd-logind[1708]: New session 9 of user core. Nov 8 00:24:30.610855 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 8 00:24:30.945352 sudo[2241]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 8 00:24:30.945745 sudo[2241]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:24:31.639189 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 8 00:24:31.646034 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:24:32.755747 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:24:32.761125 (kubelet)[2259]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:24:32.801853 kubelet[2259]: E1108 00:24:32.801795 2259 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:24:32.804911 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:24:32.805088 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Nov 8 00:24:33.159116 (dockerd)[2272]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 8 00:24:33.159319 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 8 00:24:35.766544 dockerd[2272]: time="2025-11-08T00:24:35.766482485Z" level=info msg="Starting up" Nov 8 00:24:38.159030 dockerd[2272]: time="2025-11-08T00:24:38.158971672Z" level=info msg="Loading containers: start." Nov 8 00:24:38.664738 kernel: Initializing XFRM netlink socket Nov 8 00:24:38.794015 systemd-networkd[1354]: docker0: Link UP Nov 8 00:24:38.819990 dockerd[2272]: time="2025-11-08T00:24:38.819950341Z" level=info msg="Loading containers: done." Nov 8 00:24:39.815607 dockerd[2272]: time="2025-11-08T00:24:39.815528926Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 8 00:24:39.816284 dockerd[2272]: time="2025-11-08T00:24:39.815753627Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 8 00:24:39.816284 dockerd[2272]: time="2025-11-08T00:24:39.815958028Z" level=info msg="Daemon has completed initialization" Nov 8 00:24:39.863255 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Nov 8 00:24:40.327843 dockerd[2272]: time="2025-11-08T00:24:40.327763654Z" level=info msg="API listen on /run/docker.sock" Nov 8 00:24:40.328103 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 8 00:24:41.553189 containerd[1736]: time="2025-11-08T00:24:41.553143800Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Nov 8 00:24:41.883682 update_engine[1709]: I20251108 00:24:41.883484 1709 update_attempter.cc:509] Updating boot flags... Nov 8 00:24:41.936762 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (2418) Nov 8 00:24:42.165732 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (2420) Nov 8 00:24:42.265727 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (2420) Nov 8 00:24:42.889258 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Nov 8 00:24:42.894994 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:24:43.018751 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:24:43.032315 (kubelet)[2507]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:24:43.074571 kubelet[2507]: E1108 00:24:43.074467 2507 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:24:43.077792 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:24:43.078048 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:24:48.139145 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount225623509.mount: Deactivated successfully. 
Nov 8 00:24:53.139308 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Nov 8 00:24:53.144965 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:24:53.424516 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:24:53.436062 (kubelet)[2536]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:24:53.472391 kubelet[2536]: E1108 00:24:53.472332 2536 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:24:53.475422 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:24:53.475739 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:24:55.426037 containerd[1736]: time="2025-11-08T00:24:55.425978205Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:24:55.428397 containerd[1736]: time="2025-11-08T00:24:55.428131319Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=30114901" Nov 8 00:24:55.430898 containerd[1736]: time="2025-11-08T00:24:55.430836135Z" level=info msg="ImageCreate event name:\"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:24:55.435535 containerd[1736]: time="2025-11-08T00:24:55.435141662Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:24:55.436368 containerd[1736]: time="2025-11-08T00:24:55.436328469Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"30111492\" in 13.883140669s" Nov 8 00:24:55.436447 containerd[1736]: time="2025-11-08T00:24:55.436374169Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:b7335a56022aba291f5df653c01b7ab98d64fb5cab221378617f4a1236e06a62\"" Nov 8 00:24:55.437083 containerd[1736]: time="2025-11-08T00:24:55.437047773Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Nov 8 00:24:57.058178 containerd[1736]: time="2025-11-08T00:24:57.058119500Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:24:57.060618 containerd[1736]: time="2025-11-08T00:24:57.060482215Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=26020852" Nov 8 00:24:57.063720 containerd[1736]: time="2025-11-08T00:24:57.063570834Z" level=info msg="ImageCreate event name:\"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:24:57.068177 
containerd[1736]: time="2025-11-08T00:24:57.067841460Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:24:57.069199 containerd[1736]: time="2025-11-08T00:24:57.069162768Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"27681301\" in 1.632075695s" Nov 8 00:24:57.069289 containerd[1736]: time="2025-11-08T00:24:57.069197368Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:8bb43160a0df4d7d34c89d9edbc48735bc2f830771e4b501937338221be0f668\"" Nov 8 00:24:57.070078 containerd[1736]: time="2025-11-08T00:24:57.070049973Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Nov 8 00:24:58.505300 containerd[1736]: time="2025-11-08T00:24:58.505245262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:24:58.509054 containerd[1736]: time="2025-11-08T00:24:58.508987085Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=20155576" Nov 8 00:24:58.511812 containerd[1736]: time="2025-11-08T00:24:58.511778302Z" level=info msg="ImageCreate event name:\"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:24:58.518212 containerd[1736]: time="2025-11-08T00:24:58.517001234Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:24:58.518212 containerd[1736]: time="2025-11-08T00:24:58.518062541Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"21816043\" in 1.447980767s" Nov 8 00:24:58.518212 containerd[1736]: time="2025-11-08T00:24:58.518100441Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:33b680aadf474b7e5e73957fc00c6af86dd0484c699c8461ba33ee656d1823bf\"" Nov 8 00:24:58.519246 containerd[1736]: time="2025-11-08T00:24:58.519220748Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Nov 8 00:24:59.819492 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3820883472.mount: Deactivated successfully. 
Nov 8 00:25:00.391959 containerd[1736]: time="2025-11-08T00:25:00.391892696Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:25:00.394471 containerd[1736]: time="2025-11-08T00:25:00.394318010Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=31929477" Nov 8 00:25:00.396840 containerd[1736]: time="2025-11-08T00:25:00.396797724Z" level=info msg="ImageCreate event name:\"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:25:00.400685 containerd[1736]: time="2025-11-08T00:25:00.400529246Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:25:00.401614 containerd[1736]: time="2025-11-08T00:25:00.401151149Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"31928488\" in 1.881897101s" Nov 8 00:25:00.401614 containerd[1736]: time="2025-11-08T00:25:00.401191150Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:2844ee7bb56c2c194e1f4adafb9e7b60b9ed16aa4d07ab8ad1f019362e2efab3\"" Nov 8 00:25:00.401751 containerd[1736]: time="2025-11-08T00:25:00.401730653Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Nov 8 00:25:01.000167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2603815799.mount: Deactivated successfully. 
Nov 8 00:25:02.302884 containerd[1736]: time="2025-11-08T00:25:02.302827233Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:25:02.307572 containerd[1736]: time="2025-11-08T00:25:02.307505960Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942246" Nov 8 00:25:02.310364 containerd[1736]: time="2025-11-08T00:25:02.310308876Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:25:02.315158 containerd[1736]: time="2025-11-08T00:25:02.315105304Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:25:02.316726 containerd[1736]: time="2025-11-08T00:25:02.316149010Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.914387057s" Nov 8 00:25:02.316726 containerd[1736]: time="2025-11-08T00:25:02.316190710Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Nov 8 00:25:02.317064 containerd[1736]: time="2025-11-08T00:25:02.317036315Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 8 00:25:02.865362 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3845484270.mount: Deactivated successfully. 
Nov 8 00:25:02.884166 containerd[1736]: time="2025-11-08T00:25:02.884117890Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:25:02.888560 containerd[1736]: time="2025-11-08T00:25:02.888391315Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Nov 8 00:25:02.892406 containerd[1736]: time="2025-11-08T00:25:02.891242331Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:25:02.896919 containerd[1736]: time="2025-11-08T00:25:02.895767057Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:25:02.896919 containerd[1736]: time="2025-11-08T00:25:02.896454961Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 579.386746ms" Nov 8 00:25:02.896919 containerd[1736]: time="2025-11-08T00:25:02.896510762Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 8 00:25:02.897346 containerd[1736]: time="2025-11-08T00:25:02.897319766Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Nov 8 00:25:03.443161 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4259476002.mount: Deactivated successfully. Nov 8 00:25:03.639569 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Nov 8 00:25:03.647985 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:25:03.829859 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:25:03.841031 (kubelet)[2680]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:25:03.935713 kubelet[2680]: E1108 00:25:03.935652 2680 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:25:03.937712 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:25:03.937892 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Nov 8 00:25:06.408024 containerd[1736]: time="2025-11-08T00:25:06.407952242Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:25:06.410831 containerd[1736]: time="2025-11-08T00:25:06.410578958Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58378441" Nov 8 00:25:06.414658 containerd[1736]: time="2025-11-08T00:25:06.414284579Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:25:06.418820 containerd[1736]: time="2025-11-08T00:25:06.418773305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:25:06.420325 containerd[1736]: time="2025-11-08T00:25:06.420289814Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.522942048s" Nov 8 00:25:06.420422 containerd[1736]: time="2025-11-08T00:25:06.420324814Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Nov 8 00:25:10.486352 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:25:10.502024 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:25:10.533195 systemd[1]: Reloading requested from client PID 2758 ('systemctl') (unit session-9.scope)... Nov 8 00:25:10.533211 systemd[1]: Reloading... Nov 8 00:25:10.646756 zram_generator::config[2798]: No configuration found. Nov 8 00:25:10.777718 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:25:10.879194 systemd[1]: Reloading finished in 345 ms. Nov 8 00:25:10.933017 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:25:10.938294 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:25:10.939650 systemd[1]: kubelet.service: Deactivated successfully. Nov 8 00:25:10.939893 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:25:10.945137 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:25:11.991690 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:25:12.007051 (kubelet)[2870]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:25:12.048188 kubelet[2870]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:25:12.048188 kubelet[2870]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Nov 8 00:25:12.048188 kubelet[2870]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:25:12.048904 kubelet[2870]: I1108 00:25:12.048437 2870 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:25:12.926786 kubelet[2870]: I1108 00:25:12.926689 2870 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 8 00:25:12.926786 kubelet[2870]: I1108 00:25:12.926771 2870 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:25:12.927112 kubelet[2870]: I1108 00:25:12.927089 2870 server.go:956] "Client rotation is on, will bootstrap in background" Nov 8 00:25:12.952880 kubelet[2870]: I1108 00:25:12.952845 2870 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:25:12.956048 kubelet[2870]: E1108 00:25:12.955690 2870 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.41:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.41:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 8 00:25:12.964834 kubelet[2870]: E1108 00:25:12.964775 2870 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:25:12.964834 kubelet[2870]: I1108 00:25:12.964827 2870 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 8 00:25:12.969038 kubelet[2870]: I1108 00:25:12.969011 2870 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 8 00:25:12.969306 kubelet[2870]: I1108 00:25:12.969264 2870 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:25:12.969486 kubelet[2870]: I1108 00:25:12.969298 2870 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-036966ce4d","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 8 00:25:12.969633 kubelet[2870]: I1108 00:25:12.969487 2870 topology_manager.go:138] "Creating topology manager with none policy" Nov 8 00:25:12.969633 kubelet[2870]: I1108 00:25:12.969511 2870 container_manager_linux.go:303] "Creating device plugin manager" Nov 8 00:25:12.969737 kubelet[2870]: I1108 00:25:12.969654 2870 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:25:12.972938 kubelet[2870]: I1108 00:25:12.972674 2870 kubelet.go:480] "Attempting to sync node with API server" Nov 8 00:25:12.972938 kubelet[2870]: I1108 00:25:12.972712 2870 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:25:12.972938 kubelet[2870]: I1108 00:25:12.972740 2870 kubelet.go:386] "Adding apiserver pod source" Nov 8 00:25:12.974767 kubelet[2870]: I1108 00:25:12.974498 2870 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:25:12.981250 kubelet[2870]: I1108 00:25:12.981218 2870 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:25:12.982362 kubelet[2870]: I1108 00:25:12.981816 2870 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 8 00:25:12.982714 kubelet[2870]: W1108 00:25:12.982686 2870 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Nov 8 00:25:12.985300 kubelet[2870]: I1108 00:25:12.985279 2870 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 8 00:25:12.985378 kubelet[2870]: I1108 00:25:12.985336 2870 server.go:1289] "Started kubelet" Nov 8 00:25:12.986201 kubelet[2870]: E1108 00:25:12.985559 2870 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-036966ce4d&limit=500&resourceVersion=0\": dial tcp 10.200.8.41:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 8 00:25:12.990066 kubelet[2870]: E1108 00:25:12.990019 2870 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.41:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.41:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 8 00:25:12.991452 kubelet[2870]: I1108 00:25:12.990904 2870 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:25:12.991452 kubelet[2870]: I1108 00:25:12.991192 2870 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:25:12.991452 kubelet[2870]: I1108 00:25:12.991299 2870 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:25:12.993107 kubelet[2870]: I1108 00:25:12.993076 2870 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:25:12.996466 kubelet[2870]: E1108 00:25:12.994899 2870 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.41:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.41:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-036966ce4d.1875e05b8c439743 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-036966ce4d,UID:ci-4081.3.6-n-036966ce4d,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-036966ce4d,},FirstTimestamp:2025-11-08 00:25:12.985302851 +0000 UTC m=+0.974078746,LastTimestamp:2025-11-08 00:25:12.985302851 +0000 UTC m=+0.974078746,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-036966ce4d,}" Nov 8 00:25:12.998726 kubelet[2870]: E1108 00:25:12.998706 2870 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 8 00:25:12.999965 kubelet[2870]: I1108 00:25:12.999945 2870 server.go:317] "Adding debug handlers to kubelet server" Nov 8 00:25:13.001164 kubelet[2870]: I1108 00:25:13.001135 2870 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:25:13.004625 kubelet[2870]: I1108 00:25:13.004602 2870 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 8 00:25:13.004900 kubelet[2870]: E1108 00:25:13.004874 2870 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-036966ce4d\" not found" Nov 8 00:25:13.005736 kubelet[2870]: E1108 00:25:13.005670 2870 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-036966ce4d?timeout=10s\": dial tcp 10.200.8.41:6443: connect: connection refused" interval="200ms" Nov 8 00:25:13.005905 kubelet[2870]: I1108 00:25:13.005882 2870 factory.go:223] Registration of the systemd container factory successfully Nov 8 00:25:13.006003 kubelet[2870]: I1108 00:25:13.005982 2870 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:25:13.006563 kubelet[2870]: I1108 00:25:13.006541 2870 reconciler.go:26] "Reconciler: start to sync state" Nov 8 00:25:13.006628 kubelet[2870]: I1108 00:25:13.006586 2870 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 8 00:25:13.007473 kubelet[2870]: E1108 00:25:13.007441 2870 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.41:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 8 00:25:13.008000 kubelet[2870]: I1108 00:25:13.007978 2870 factory.go:223] Registration of the containerd container factory successfully Nov 8 00:25:13.060993 kubelet[2870]: I1108 00:25:13.060685 2870 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:25:13.060993 kubelet[2870]: I1108 00:25:13.060721 2870 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:25:13.060993 kubelet[2870]: I1108 00:25:13.060741 2870 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:25:13.064628 kubelet[2870]: I1108 00:25:13.064200 2870 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 8 00:25:13.067103 kubelet[2870]: I1108 00:25:13.066172 2870 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 8 00:25:13.067103 kubelet[2870]: I1108 00:25:13.066199 2870 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 8 00:25:13.067103 kubelet[2870]: I1108 00:25:13.066222 2870 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 8 00:25:13.067103 kubelet[2870]: I1108 00:25:13.066234 2870 kubelet.go:2436] "Starting kubelet main sync loop" Nov 8 00:25:13.067103 kubelet[2870]: E1108 00:25:13.066280 2870 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:25:13.068744 kubelet[2870]: E1108 00:25:13.068527 2870 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.41:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 8 00:25:13.069009 kubelet[2870]: I1108 00:25:13.068875 2870 policy_none.go:49] "None policy: Start" Nov 8 00:25:13.069009 kubelet[2870]: I1108 00:25:13.068896 2870 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 8 00:25:13.069009 kubelet[2870]: I1108 00:25:13.068910 2870 state_mem.go:35] "Initializing new in-memory state store" Nov 8 00:25:13.076566 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 8 00:25:13.085912 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 8 00:25:13.093312 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 8 00:25:13.094585 kubelet[2870]: E1108 00:25:13.094554 2870 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 8 00:25:13.094810 kubelet[2870]: I1108 00:25:13.094790 2870 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:25:13.094882 kubelet[2870]: I1108 00:25:13.094808 2870 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:25:13.096064 kubelet[2870]: I1108 00:25:13.096039 2870 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:25:13.097180 kubelet[2870]: E1108 00:25:13.097160 2870 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 8 00:25:13.097271 kubelet[2870]: E1108 00:25:13.097206 2870 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-036966ce4d\" not found" Nov 8 00:25:13.181047 systemd[1]: Created slice kubepods-burstable-podfcd8fe706feac2e3fa2973585b34daef.slice - libcontainer container kubepods-burstable-podfcd8fe706feac2e3fa2973585b34daef.slice. Nov 8 00:25:13.189317 kubelet[2870]: E1108 00:25:13.189279 2870 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-036966ce4d\" not found" node="ci-4081.3.6-n-036966ce4d" Nov 8 00:25:13.193600 systemd[1]: Created slice kubepods-burstable-pod98b41822aedf8a4fc6c20febfd21e1e5.slice - libcontainer container kubepods-burstable-pod98b41822aedf8a4fc6c20febfd21e1e5.slice. 
Nov 8 00:25:13.195965 kubelet[2870]: E1108 00:25:13.195565 2870 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-036966ce4d\" not found" node="ci-4081.3.6-n-036966ce4d" Nov 8 00:25:13.197185 kubelet[2870]: I1108 00:25:13.197161 2870 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-036966ce4d" Nov 8 00:25:13.197650 kubelet[2870]: E1108 00:25:13.197624 2870 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.41:6443/api/v1/nodes\": dial tcp 10.200.8.41:6443: connect: connection refused" node="ci-4081.3.6-n-036966ce4d" Nov 8 00:25:13.199423 systemd[1]: Created slice kubepods-burstable-pod1bcc7ad69bea3c7ad2bec9ff032e9515.slice - libcontainer container kubepods-burstable-pod1bcc7ad69bea3c7ad2bec9ff032e9515.slice. Nov 8 00:25:13.201041 kubelet[2870]: E1108 00:25:13.201018 2870 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-036966ce4d\" not found" node="ci-4081.3.6-n-036966ce4d" Nov 8 00:25:13.206727 kubelet[2870]: E1108 00:25:13.206688 2870 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-036966ce4d?timeout=10s\": dial tcp 10.200.8.41:6443: connect: connection refused" interval="400ms" Nov 8 00:25:13.206941 kubelet[2870]: I1108 00:25:13.206919 2870 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98b41822aedf8a4fc6c20febfd21e1e5-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-036966ce4d\" (UID: \"98b41822aedf8a4fc6c20febfd21e1e5\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-036966ce4d" Nov 8 00:25:13.207033 kubelet[2870]: I1108 00:25:13.206950 2870 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98b41822aedf8a4fc6c20febfd21e1e5-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-036966ce4d\" (UID: \"98b41822aedf8a4fc6c20febfd21e1e5\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-036966ce4d" Nov 8 00:25:13.207033 kubelet[2870]: I1108 00:25:13.206975 2870 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/98b41822aedf8a4fc6c20febfd21e1e5-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-036966ce4d\" (UID: \"98b41822aedf8a4fc6c20febfd21e1e5\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-036966ce4d" Nov 8 00:25:13.207033 kubelet[2870]: I1108 00:25:13.207016 2870 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98b41822aedf8a4fc6c20febfd21e1e5-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-036966ce4d\" (UID: \"98b41822aedf8a4fc6c20febfd21e1e5\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-036966ce4d" Nov 8 00:25:13.207271 kubelet[2870]: I1108 00:25:13.207041 2870 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fcd8fe706feac2e3fa2973585b34daef-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-036966ce4d\" (UID: \"fcd8fe706feac2e3fa2973585b34daef\") " 
pod="kube-system/kube-apiserver-ci-4081.3.6-n-036966ce4d" Nov 8 00:25:13.207271 kubelet[2870]: I1108 00:25:13.207087 2870 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fcd8fe706feac2e3fa2973585b34daef-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-036966ce4d\" (UID: \"fcd8fe706feac2e3fa2973585b34daef\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-036966ce4d" Nov 8 00:25:13.207271 kubelet[2870]: I1108 00:25:13.207108 2870 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/98b41822aedf8a4fc6c20febfd21e1e5-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-036966ce4d\" (UID: \"98b41822aedf8a4fc6c20febfd21e1e5\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-036966ce4d" Nov 8 00:25:13.207271 kubelet[2870]: I1108 00:25:13.207125 2870 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1bcc7ad69bea3c7ad2bec9ff032e9515-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-036966ce4d\" (UID: \"1bcc7ad69bea3c7ad2bec9ff032e9515\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-036966ce4d" Nov 8 00:25:13.207271 kubelet[2870]: I1108 00:25:13.207140 2870 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fcd8fe706feac2e3fa2973585b34daef-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-036966ce4d\" (UID: \"fcd8fe706feac2e3fa2973585b34daef\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-036966ce4d" Nov 8 00:25:13.399546 kubelet[2870]: I1108 00:25:13.399511 2870 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-036966ce4d" Nov 8 00:25:13.399959 kubelet[2870]: E1108 00:25:13.399924 2870 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.41:6443/api/v1/nodes\": dial tcp 10.200.8.41:6443: connect: connection refused" node="ci-4081.3.6-n-036966ce4d" Nov 8 00:25:13.490898 containerd[1736]: time="2025-11-08T00:25:13.490756343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-036966ce4d,Uid:fcd8fe706feac2e3fa2973585b34daef,Namespace:kube-system,Attempt:0,}" Nov 8 00:25:13.497670 containerd[1736]: time="2025-11-08T00:25:13.497621787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-036966ce4d,Uid:98b41822aedf8a4fc6c20febfd21e1e5,Namespace:kube-system,Attempt:0,}" Nov 8 00:25:13.502731 containerd[1736]: time="2025-11-08T00:25:13.502664620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-036966ce4d,Uid:1bcc7ad69bea3c7ad2bec9ff032e9515,Namespace:kube-system,Attempt:0,}" Nov 8 00:25:13.607342 kubelet[2870]: E1108 00:25:13.607293 2870 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-036966ce4d?timeout=10s\": dial tcp 10.200.8.41:6443: connect: connection refused" interval="800ms" Nov 8 00:25:13.801979 kubelet[2870]: I1108 00:25:13.801862 2870 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-036966ce4d" Nov 8 00:25:13.802349 kubelet[2870]: E1108 00:25:13.802241 2870 kubelet_node_status.go:107] "Unable to register node 
with API server" err="Post \"https://10.200.8.41:6443/api/v1/nodes\": dial tcp 10.200.8.41:6443: connect: connection refused" node="ci-4081.3.6-n-036966ce4d" Nov 8 00:25:13.806931 kubelet[2870]: E1108 00:25:13.806898 2870 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.41:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.41:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 8 00:25:13.906202 kubelet[2870]: E1108 00:25:13.906156 2870 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.41:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 8 00:25:13.921981 kubelet[2870]: E1108 00:25:13.921936 2870 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-036966ce4d&limit=500&resourceVersion=0\": dial tcp 10.200.8.41:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 8 00:25:13.950678 kubelet[2870]: E1108 00:25:13.950637 2870 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.41:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 8 00:25:14.408534 kubelet[2870]: E1108 00:25:14.408485 2870 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-036966ce4d?timeout=10s\": dial tcp 10.200.8.41:6443: connect: connection refused" interval="1.6s" Nov 8 00:25:14.605043 kubelet[2870]: I1108 00:25:14.605006 2870 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-036966ce4d" Nov 8 00:25:14.605464 kubelet[2870]: E1108 00:25:14.605428 2870 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.41:6443/api/v1/nodes\": dial tcp 10.200.8.41:6443: connect: connection refused" node="ci-4081.3.6-n-036966ce4d" Nov 8 00:25:15.040980 kubelet[2870]: E1108 00:25:15.040938 2870 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.41:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.41:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 8 00:25:15.692359 kubelet[2870]: E1108 00:25:15.692313 2870 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-036966ce4d&limit=500&resourceVersion=0\": dial tcp 10.200.8.41:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 8 00:25:15.839625 kubelet[2870]: E1108 00:25:15.839573 2870 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get 
\"https://10.200.8.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.41:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 8 00:25:16.009518 kubelet[2870]: E1108 00:25:16.009395 2870 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-036966ce4d?timeout=10s\": dial tcp 10.200.8.41:6443: connect: connection refused" interval="3.2s" Nov 8 00:25:16.117158 kubelet[2870]: E1108 00:25:16.117107 2870 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.41:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 8 00:25:16.207731 kubelet[2870]: I1108 00:25:16.207683 2870 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-036966ce4d" Nov 8 00:25:16.208787 kubelet[2870]: E1108 00:25:16.208751 2870 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.41:6443/api/v1/nodes\": dial tcp 10.200.8.41:6443: connect: connection refused" node="ci-4081.3.6-n-036966ce4d" Nov 8 00:25:16.650518 kubelet[2870]: E1108 00:25:16.650476 2870 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.41:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.41:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 8 00:25:19.078064 kubelet[2870]: E1108 00:25:19.078020 2870 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.41:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.41:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 8 00:25:21.365101 kubelet[2870]: E1108 00:25:19.210170 2870 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-036966ce4d?timeout=10s\": dial tcp 10.200.8.41:6443: connect: connection refused" interval="6.4s" Nov 8 00:25:21.365101 kubelet[2870]: E1108 00:25:19.401730 2870 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.41:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.41:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-036966ce4d.1875e05b8c439743 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-036966ce4d,UID:ci-4081.3.6-n-036966ce4d,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-036966ce4d,},FirstTimestamp:2025-11-08 00:25:12.985302851 +0000 UTC m=+0.974078746,LastTimestamp:2025-11-08 00:25:12.985302851 +0000 UTC m=+0.974078746,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-036966ce4d,}" Nov 8 00:25:21.365101 
kubelet[2870]: I1108 00:25:19.410659 2870 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-036966ce4d" Nov 8 00:25:21.365101 kubelet[2870]: E1108 00:25:19.411038 2870 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.41:6443/api/v1/nodes\": dial tcp 10.200.8.41:6443: connect: connection refused" node="ci-4081.3.6-n-036966ce4d" Nov 8 00:25:21.365101 kubelet[2870]: E1108 00:25:20.596591 2870 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.41:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 8 00:25:21.365759 kubelet[2870]: E1108 00:25:21.000462 2870 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-036966ce4d&limit=500&resourceVersion=0\": dial tcp 10.200.8.41:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 8 00:25:21.655677 kubelet[2870]: E1108 00:25:21.655631 2870 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.41:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 8 00:25:22.579308 kubelet[2870]: E1108 00:25:22.579256 2870 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.41:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.41:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 8 00:25:23.097391 kubelet[2870]: E1108 00:25:23.097328 2870 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-036966ce4d\" not found" Nov 8 00:25:25.611325 kubelet[2870]: E1108 00:25:25.611279 2870 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-036966ce4d?timeout=10s\": dial tcp 10.200.8.41:6443: connect: connection refused" interval="7s" Nov 8 00:25:26.805757 kubelet[2870]: I1108 00:25:25.813150 2870 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-036966ce4d" Nov 8 00:25:26.805757 kubelet[2870]: E1108 00:25:25.813481 2870 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.41:6443/api/v1/nodes\": dial tcp 10.200.8.41:6443: connect: connection refused" node="ci-4081.3.6-n-036966ce4d" Nov 8 00:25:27.580338 kubelet[2870]: E1108 00:25:27.580293 2870 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.41:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.41:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 8 00:25:29.125460 kubelet[2870]: E1108 00:25:29.125406 2870 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get 
\"https://10.200.8.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-036966ce4d&limit=500&resourceVersion=0\": dial tcp 10.200.8.41:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 8 00:25:29.238543 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3515320563.mount: Deactivated successfully. Nov 8 00:25:29.262853 containerd[1736]: time="2025-11-08T00:25:29.262801418Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:25:29.268717 containerd[1736]: time="2025-11-08T00:25:29.268437353Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Nov 8 00:25:29.272049 containerd[1736]: time="2025-11-08T00:25:29.271972675Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:25:29.274508 containerd[1736]: time="2025-11-08T00:25:29.274429491Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:25:29.278007 containerd[1736]: time="2025-11-08T00:25:29.277955813Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:25:29.281197 containerd[1736]: time="2025-11-08T00:25:29.281156833Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:25:29.285048 containerd[1736]: time="2025-11-08T00:25:29.284994857Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:25:29.290419 containerd[1736]: time="2025-11-08T00:25:29.290366991Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:25:29.291549 containerd[1736]: time="2025-11-08T00:25:29.291257497Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 15.793524109s" Nov 8 00:25:29.293645 containerd[1736]: time="2025-11-08T00:25:29.293603211Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 15.79076359s" Nov 8 00:25:29.294355 containerd[1736]: time="2025-11-08T00:25:29.294321616Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 15.803475073s" Nov 8 00:25:29.402384 kubelet[2870]: E1108 00:25:29.402251 2870 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.41:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.41:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-036966ce4d.1875e05b8c439743 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-036966ce4d,UID:ci-4081.3.6-n-036966ce4d,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-036966ce4d,},FirstTimestamp:2025-11-08 00:25:12.985302851 +0000 UTC m=+0.974078746,LastTimestamp:2025-11-08 00:25:12.985302851 +0000 UTC m=+0.974078746,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-036966ce4d,}" Nov 8 00:25:29.945967 containerd[1736]: time="2025-11-08T00:25:29.945068405Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:25:29.945967 containerd[1736]: time="2025-11-08T00:25:29.945156706Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:25:29.945967 containerd[1736]: time="2025-11-08T00:25:29.945177806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:25:29.945967 containerd[1736]: time="2025-11-08T00:25:29.945302907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:25:29.947763 containerd[1736]: time="2025-11-08T00:25:29.947147918Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:25:29.947763 containerd[1736]: time="2025-11-08T00:25:29.947359520Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:25:29.947763 containerd[1736]: time="2025-11-08T00:25:29.947384520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:25:29.947763 containerd[1736]: time="2025-11-08T00:25:29.947509621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:25:29.958891 containerd[1736]: time="2025-11-08T00:25:29.958742091Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:25:29.958891 containerd[1736]: time="2025-11-08T00:25:29.958815992Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:25:29.959115 containerd[1736]: time="2025-11-08T00:25:29.958866092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:25:29.959235 containerd[1736]: time="2025-11-08T00:25:29.959032393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:25:30.004012 systemd[1]: Started cri-containerd-7070d8b1531a4db391a504a1ff984c9fb4ccb1b4e07b8e59d2dc554a3921a8c2.scope - libcontainer container 7070d8b1531a4db391a504a1ff984c9fb4ccb1b4e07b8e59d2dc554a3921a8c2. Nov 8 00:25:30.016036 systemd[1]: Started cri-containerd-18aea64cac66d45f71a189e9ff04abf68a08af683a8124ba1272b7e4e6b3b0ba.scope - libcontainer container 18aea64cac66d45f71a189e9ff04abf68a08af683a8124ba1272b7e4e6b3b0ba. Nov 8 00:25:30.019306 systemd[1]: Started cri-containerd-22df39e882127e8c3263d4ada04b503b28974402dcd7fabc3da8e9d1f454aed5.scope - libcontainer container 22df39e882127e8c3263d4ada04b503b28974402dcd7fabc3da8e9d1f454aed5. Nov 8 00:25:30.080350 containerd[1736]: time="2025-11-08T00:25:30.080300955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-036966ce4d,Uid:fcd8fe706feac2e3fa2973585b34daef,Namespace:kube-system,Attempt:0,} returns sandbox id \"7070d8b1531a4db391a504a1ff984c9fb4ccb1b4e07b8e59d2dc554a3921a8c2\"" Nov 8 00:25:30.090404 containerd[1736]: time="2025-11-08T00:25:30.090363118Z" level=info msg="CreateContainer within sandbox \"7070d8b1531a4db391a504a1ff984c9fb4ccb1b4e07b8e59d2dc554a3921a8c2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 8 00:25:30.112995 containerd[1736]: time="2025-11-08T00:25:30.112956560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-036966ce4d,Uid:1bcc7ad69bea3c7ad2bec9ff032e9515,Namespace:kube-system,Attempt:0,} returns sandbox id \"22df39e882127e8c3263d4ada04b503b28974402dcd7fabc3da8e9d1f454aed5\"" Nov 8 00:25:30.119474 containerd[1736]: time="2025-11-08T00:25:30.119402501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-036966ce4d,Uid:98b41822aedf8a4fc6c20febfd21e1e5,Namespace:kube-system,Attempt:0,} returns sandbox id \"18aea64cac66d45f71a189e9ff04abf68a08af683a8124ba1272b7e4e6b3b0ba\"" Nov 8 00:25:30.132161 containerd[1736]: time="2025-11-08T00:25:30.132115981Z" level=info msg="CreateContainer within sandbox \"22df39e882127e8c3263d4ada04b503b28974402dcd7fabc3da8e9d1f454aed5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 8 00:25:30.136360 containerd[1736]: time="2025-11-08T00:25:30.136325707Z" level=info msg="CreateContainer within sandbox \"18aea64cac66d45f71a189e9ff04abf68a08af683a8124ba1272b7e4e6b3b0ba\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 8 00:25:30.151196 containerd[1736]: time="2025-11-08T00:25:30.151150700Z" level=info msg="CreateContainer within sandbox \"7070d8b1531a4db391a504a1ff984c9fb4ccb1b4e07b8e59d2dc554a3921a8c2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"09d760636a214a78060166fc45bd479d8a73571f47bd8c875226ea84711ce1b0\"" Nov 8 00:25:30.151863 containerd[1736]: time="2025-11-08T00:25:30.151826505Z" level=info msg="StartContainer for \"09d760636a214a78060166fc45bd479d8a73571f47bd8c875226ea84711ce1b0\"" Nov 8 00:25:30.180879 systemd[1]: Started cri-containerd-09d760636a214a78060166fc45bd479d8a73571f47bd8c875226ea84711ce1b0.scope - libcontainer container 09d760636a214a78060166fc45bd479d8a73571f47bd8c875226ea84711ce1b0. 
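[Annotation] The entries at 00:25:29-00:25:30 above show the kubelet driving containerd through the standard CRI sequence for each static control-plane pod: RunPodSandbox returns a sandbox ID, CreateContainer places a container into that sandbox, and StartContainer runs it. Below is a minimal Go sketch of that call order only; the runtimeService interface and fakeRuntime are hypothetical stand-ins, not the real k8s.io/cri-api client types, which take much richer request/response messages.

    package main

    import "fmt"

    // runtimeService is a hypothetical, minimal stand-in for the CRI runtime
    // service, kept just to show the call order visible in the log.
    type runtimeService interface {
        RunPodSandbox(pod string) (sandboxID string, err error)
        CreateContainer(sandboxID, name string) (containerID string, err error)
        StartContainer(containerID string) error
    }

    type fakeRuntime struct{}

    func (fakeRuntime) RunPodSandbox(pod string) (string, error) {
        return "sandbox-" + pod, nil
    }
    func (fakeRuntime) CreateContainer(sandboxID, name string) (string, error) {
        return "ctr-" + name, nil
    }
    func (fakeRuntime) StartContainer(id string) error {
        fmt.Println("started", id)
        return nil
    }

    // startStaticPod mirrors the sequence in the log:
    // RunPodSandbox -> CreateContainer -> StartContainer.
    func startStaticPod(rs runtimeService, pod, container string) error {
        sb, err := rs.RunPodSandbox(pod)
        if err != nil {
            return fmt.Errorf("RunPodSandbox %s: %w", pod, err)
        }
        ctr, err := rs.CreateContainer(sb, container)
        if err != nil {
            return fmt.Errorf("CreateContainer %s: %w", container, err)
        }
        return rs.StartContainer(ctr)
    }

    func main() {
        for _, name := range []string{"kube-apiserver", "kube-scheduler", "kube-controller-manager"} {
            if err := startStaticPod(fakeRuntime{}, name+"-ci-4081.3.6-n-036966ce4d", name); err != nil {
                fmt.Println("error:", err)
            }
        }
    }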
Nov 8 00:25:30.190538 containerd[1736]: time="2025-11-08T00:25:30.190481448Z" level=info msg="CreateContainer within sandbox \"22df39e882127e8c3263d4ada04b503b28974402dcd7fabc3da8e9d1f454aed5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9eaa806aa1866ebf6c26a218465f27260215f8fb8c5aaf9da37aa62601cd604c\"" Nov 8 00:25:30.191451 containerd[1736]: time="2025-11-08T00:25:30.191419153Z" level=info msg="StartContainer for \"9eaa806aa1866ebf6c26a218465f27260215f8fb8c5aaf9da37aa62601cd604c\"" Nov 8 00:25:30.203364 containerd[1736]: time="2025-11-08T00:25:30.201918919Z" level=info msg="CreateContainer within sandbox \"18aea64cac66d45f71a189e9ff04abf68a08af683a8124ba1272b7e4e6b3b0ba\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"19d97fd5062e820166344913b29170d84c977efef7872c374733a34e47e4adee\"" Nov 8 00:25:30.204548 containerd[1736]: time="2025-11-08T00:25:30.204279034Z" level=info msg="StartContainer for \"19d97fd5062e820166344913b29170d84c977efef7872c374733a34e47e4adee\"" Nov 8 00:25:30.253723 containerd[1736]: time="2025-11-08T00:25:30.246338699Z" level=info msg="StartContainer for \"09d760636a214a78060166fc45bd479d8a73571f47bd8c875226ea84711ce1b0\" returns successfully" Nov 8 00:25:30.271414 systemd[1]: run-containerd-runc-k8s.io-9eaa806aa1866ebf6c26a218465f27260215f8fb8c5aaf9da37aa62601cd604c-runc.Ey7AaH.mount: Deactivated successfully. Nov 8 00:25:30.279894 systemd[1]: Started cri-containerd-9eaa806aa1866ebf6c26a218465f27260215f8fb8c5aaf9da37aa62601cd604c.scope - libcontainer container 9eaa806aa1866ebf6c26a218465f27260215f8fb8c5aaf9da37aa62601cd604c. Nov 8 00:25:30.305878 systemd[1]: Started cri-containerd-19d97fd5062e820166344913b29170d84c977efef7872c374733a34e47e4adee.scope - libcontainer container 19d97fd5062e820166344913b29170d84c977efef7872c374733a34e47e4adee. Nov 8 00:25:30.405142 containerd[1736]: time="2025-11-08T00:25:30.405093996Z" level=info msg="StartContainer for \"9eaa806aa1866ebf6c26a218465f27260215f8fb8c5aaf9da37aa62601cd604c\" returns successfully" Nov 8 00:25:30.406055 containerd[1736]: time="2025-11-08T00:25:30.405189997Z" level=info msg="StartContainer for \"19d97fd5062e820166344913b29170d84c977efef7872c374733a34e47e4adee\" returns successfully" Nov 8 00:25:31.109609 kubelet[2870]: E1108 00:25:31.109500 2870 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-036966ce4d\" not found" node="ci-4081.3.6-n-036966ce4d" Nov 8 00:25:31.113689 kubelet[2870]: E1108 00:25:31.113407 2870 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-036966ce4d\" not found" node="ci-4081.3.6-n-036966ce4d" Nov 8 00:25:31.116140 kubelet[2870]: E1108 00:25:31.116118 2870 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-036966ce4d\" not found" node="ci-4081.3.6-n-036966ce4d" Nov 8 00:25:31.230826 systemd[1]: run-containerd-runc-k8s.io-19d97fd5062e820166344913b29170d84c977efef7872c374733a34e47e4adee-runc.AQKlSh.mount: Deactivated successfully. 
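[Annotation] Note the lease-controller retries between 00:25:13 and 00:25:25 above: each failed attempt doubles the reported interval (400ms, 800ms, 1.6s, 3.2s, 6.4s) until it flattens out near 7s, and the retries stop once the kube-apiserver container started here comes up. A sketch of that capped doubling follows; the 7s ceiling is read off the log itself, not taken from kubelet source.

    package main

    import (
        "fmt"
        "time"
    )

    // nextInterval doubles the retry interval and caps it, reproducing the
    // sequence reported by the lease controller: 400ms, 800ms, 1.6s, 3.2s,
    // 6.4s, then a 7s ceiling.
    func nextInterval(cur, max time.Duration) time.Duration {
        cur *= 2
        if cur > max {
            return max
        }
        return cur
    }

    func main() {
        const max = 7 * time.Second
        interval := 400 * time.Millisecond
        for i := 0; i < 6; i++ {
            fmt.Println(interval) // 400ms 800ms 1.6s 3.2s 6.4s 7s
            interval = nextInterval(interval, max)
        }
    }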
Nov 8 00:25:32.119356 kubelet[2870]: E1108 00:25:32.119118 2870 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-036966ce4d\" not found" node="ci-4081.3.6-n-036966ce4d" Nov 8 00:25:32.120808 kubelet[2870]: E1108 00:25:32.120624 2870 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-036966ce4d\" not found" node="ci-4081.3.6-n-036966ce4d" Nov 8 00:25:32.684721 kubelet[2870]: E1108 00:25:32.683106 2870 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-n-036966ce4d\" not found" node="ci-4081.3.6-n-036966ce4d" Nov 8 00:25:32.817519 kubelet[2870]: I1108 00:25:32.817198 2870 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-036966ce4d" Nov 8 00:25:32.851027 kubelet[2870]: I1108 00:25:32.850990 2870 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-036966ce4d" Nov 8 00:25:32.851420 kubelet[2870]: E1108 00:25:32.851208 2870 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081.3.6-n-036966ce4d\": node \"ci-4081.3.6-n-036966ce4d\" not found" Nov 8 00:25:32.882457 kubelet[2870]: E1108 00:25:32.882363 2870 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-036966ce4d\" not found" Nov 8 00:25:32.983850 kubelet[2870]: E1108 00:25:32.983283 2870 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-036966ce4d\" not found" Nov 8 00:25:33.083433 kubelet[2870]: E1108 00:25:33.083388 2870 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-036966ce4d\" not found" Nov 8 00:25:33.097679 kubelet[2870]: E1108 00:25:33.097622 2870 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-036966ce4d\" not found" Nov 8 00:25:33.120246 kubelet[2870]: E1108 00:25:33.119993 2870 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-036966ce4d\" not found" node="ci-4081.3.6-n-036966ce4d" Nov 8 00:25:33.184074 kubelet[2870]: E1108 00:25:33.184001 2870 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-036966ce4d\" not found" Nov 8 00:25:33.284972 kubelet[2870]: E1108 00:25:33.284831 2870 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-036966ce4d\" not found" Nov 8 00:25:33.385560 kubelet[2870]: E1108 00:25:33.385496 2870 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-036966ce4d\" not found" Nov 8 00:25:33.486611 kubelet[2870]: E1108 00:25:33.486560 2870 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-036966ce4d\" not found" Nov 8 00:25:33.587993 kubelet[2870]: E1108 00:25:33.587518 2870 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-036966ce4d\" not found" Nov 8 00:25:33.689540 kubelet[2870]: E1108 00:25:33.688508 2870 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-036966ce4d\" not found" Nov 8 00:25:33.788889 kubelet[2870]: E1108 00:25:33.788842 2870 kubelet_node_status.go:466] "Error getting the current node from lister" err="node 
\"ci-4081.3.6-n-036966ce4d\" not found" Nov 8 00:25:33.889060 kubelet[2870]: E1108 00:25:33.889012 2870 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-036966ce4d\" not found" Nov 8 00:25:33.989242 kubelet[2870]: E1108 00:25:33.989138 2870 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-036966ce4d\" not found" Nov 8 00:25:34.089601 kubelet[2870]: E1108 00:25:34.089558 2870 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-036966ce4d\" not found" Nov 8 00:25:34.190818 kubelet[2870]: E1108 00:25:34.189740 2870 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-036966ce4d\" not found" Nov 8 00:25:34.290780 kubelet[2870]: E1108 00:25:34.290739 2870 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-036966ce4d\" not found" Nov 8 00:25:34.391530 kubelet[2870]: E1108 00:25:34.391481 2870 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-036966ce4d\" not found" Nov 8 00:25:34.492691 kubelet[2870]: E1108 00:25:34.492558 2870 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-036966ce4d\" not found" Nov 8 00:25:34.593240 kubelet[2870]: E1108 00:25:34.593185 2870 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-036966ce4d\" not found" Nov 8 00:25:34.693482 kubelet[2870]: E1108 00:25:34.693382 2870 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-036966ce4d\" not found" Nov 8 00:25:34.794287 kubelet[2870]: E1108 00:25:34.794146 2870 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-036966ce4d\" not found" Nov 8 00:25:34.895071 kubelet[2870]: E1108 00:25:34.895010 2870 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-036966ce4d\" not found" Nov 8 00:25:34.996112 kubelet[2870]: E1108 00:25:34.996068 2870 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-036966ce4d\" not found" Nov 8 00:25:35.018229 systemd[1]: Reloading requested from client PID 3158 ('systemctl') (unit session-9.scope)... Nov 8 00:25:35.018246 systemd[1]: Reloading... Nov 8 00:25:35.097579 kubelet[2870]: E1108 00:25:35.096199 2870 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-036966ce4d\" not found" Nov 8 00:25:35.117433 kubelet[2870]: E1108 00:25:35.117135 2870 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-036966ce4d\" not found" node="ci-4081.3.6-n-036966ce4d" Nov 8 00:25:35.128734 zram_generator::config[3198]: No configuration found. Nov 8 00:25:35.197348 kubelet[2870]: E1108 00:25:35.197303 2870 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-036966ce4d\" not found" Nov 8 00:25:35.252828 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Nov 8 00:25:35.298369 kubelet[2870]: E1108 00:25:35.298316 2870 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-036966ce4d\" not found" Nov 8 00:25:35.345074 systemd[1]: Reloading finished in 326 ms. Nov 8 00:25:35.388785 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:25:35.408717 systemd[1]: kubelet.service: Deactivated successfully. Nov 8 00:25:35.409110 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:25:35.409191 systemd[1]: kubelet.service: Consumed 1.430s CPU time, 129.0M memory peak, 0B memory swap peak. Nov 8 00:25:35.416069 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:25:35.559282 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:25:35.571123 (kubelet)[3265]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:25:35.614418 kubelet[3265]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:25:35.614418 kubelet[3265]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 8 00:25:35.614418 kubelet[3265]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:25:35.614940 kubelet[3265]: I1108 00:25:35.614430 3265 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:25:35.623229 kubelet[3265]: I1108 00:25:35.623188 3265 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 8 00:25:35.623229 kubelet[3265]: I1108 00:25:35.623215 3265 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:25:35.623471 kubelet[3265]: I1108 00:25:35.623451 3265 server.go:956] "Client rotation is on, will bootstrap in background" Nov 8 00:25:35.624535 kubelet[3265]: I1108 00:25:35.624508 3265 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 8 00:25:35.627342 kubelet[3265]: I1108 00:25:35.626639 3265 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:25:35.630023 kubelet[3265]: E1108 00:25:35.629990 3265 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:25:35.630023 kubelet[3265]: I1108 00:25:35.630015 3265 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 8 00:25:35.636733 kubelet[3265]: I1108 00:25:35.635195 3265 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 8 00:25:35.636733 kubelet[3265]: I1108 00:25:35.635536 3265 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:25:35.636733 kubelet[3265]: I1108 00:25:35.635562 3265 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-036966ce4d","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 8 00:25:35.636733 kubelet[3265]: I1108 00:25:35.635891 3265 topology_manager.go:138] "Creating topology manager with none policy" Nov 8 00:25:35.637018 kubelet[3265]: I1108 00:25:35.635905 3265 container_manager_linux.go:303] "Creating device plugin manager" Nov 8 00:25:35.637018 kubelet[3265]: I1108 00:25:35.635960 3265 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:25:35.637018 kubelet[3265]: I1108 00:25:35.636136 3265 kubelet.go:480] "Attempting to sync node with API server" Nov 8 00:25:35.637018 kubelet[3265]: I1108 00:25:35.636168 3265 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:25:35.637018 kubelet[3265]: I1108 00:25:35.636197 3265 kubelet.go:386] "Adding apiserver pod source" Nov 8 00:25:35.637018 kubelet[3265]: I1108 00:25:35.636216 3265 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:25:35.644500 kubelet[3265]: I1108 00:25:35.644370 3265 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:25:35.645327 kubelet[3265]: I1108 00:25:35.645279 3265 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 8 00:25:35.650236 kubelet[3265]: I1108 00:25:35.650206 3265 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 8 00:25:35.650341 kubelet[3265]: I1108 00:25:35.650262 3265 server.go:1289] "Started kubelet" Nov 8 00:25:35.652800 kubelet[3265]: I1108 00:25:35.652665 3265 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:25:35.665750 kubelet[3265]: I1108 00:25:35.665433 
3265 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:25:35.667720 kubelet[3265]: I1108 00:25:35.666719 3265 server.go:317] "Adding debug handlers to kubelet server" Nov 8 00:25:35.675726 kubelet[3265]: I1108 00:25:35.674194 3265 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:25:35.675726 kubelet[3265]: I1108 00:25:35.674434 3265 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:25:35.675726 kubelet[3265]: I1108 00:25:35.674673 3265 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:25:35.679569 kubelet[3265]: I1108 00:25:35.679543 3265 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 8 00:25:35.680296 kubelet[3265]: E1108 00:25:35.680256 3265 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-036966ce4d\" not found" Nov 8 00:25:35.681463 kubelet[3265]: I1108 00:25:35.681186 3265 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 8 00:25:35.684885 kubelet[3265]: I1108 00:25:35.682021 3265 reconciler.go:26] "Reconciler: start to sync state" Nov 8 00:25:35.694437 kubelet[3265]: I1108 00:25:35.693744 3265 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:25:35.706020 kubelet[3265]: E1108 00:25:35.705998 3265 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 8 00:25:35.706270 kubelet[3265]: I1108 00:25:35.706256 3265 factory.go:223] Registration of the containerd container factory successfully Nov 8 00:25:35.706843 kubelet[3265]: I1108 00:25:35.706826 3265 factory.go:223] Registration of the systemd container factory successfully Nov 8 00:25:35.714083 kubelet[3265]: I1108 00:25:35.714050 3265 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 8 00:25:35.716511 kubelet[3265]: I1108 00:25:35.716464 3265 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 8 00:25:35.716511 kubelet[3265]: I1108 00:25:35.716512 3265 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 8 00:25:35.716664 kubelet[3265]: I1108 00:25:35.716534 3265 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
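[Annotation] The podresources endpoint above is rate limited with qps=100 and burstTokens=10. Whatever the kubelet's exact internal wiring, a token bucket with those parameters behaves like the sketch below, built on golang.org/x/time/rate: roughly ten requests are admitted immediately, then about one every 10ms as tokens refill.

    package main

    import (
        "context"
        "fmt"
        "time"

        "golang.org/x/time/rate"
    )

    func main() {
        // Token bucket matching the logged parameters: refill rate of
        // 100 requests/s with a burst of 10 tokens.
        limiter := rate.NewLimiter(rate.Limit(100), 10)

        ctx, cancel := context.WithTimeout(context.Background(), time.Second)
        defer cancel()

        served := 0
        for limiter.Allow() { // drain the burst without blocking
            served++
        }
        fmt.Println("burst served immediately:", served) // ~10 (tokens refill while draining)

        // The next request waits for a refill, ~10ms per token at qps=100.
        if err := limiter.Wait(ctx); err == nil {
            fmt.Println("next request admitted after refill")
        }
    }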
Nov 8 00:25:35.716664 kubelet[3265]: I1108 00:25:35.716544 3265 kubelet.go:2436] "Starting kubelet main sync loop" Nov 8 00:25:35.716664 kubelet[3265]: E1108 00:25:35.716605 3265 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:25:35.755516 kubelet[3265]: I1108 00:25:35.755494 3265 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:25:35.755676 kubelet[3265]: I1108 00:25:35.755666 3265 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:25:35.755758 kubelet[3265]: I1108 00:25:35.755743 3265 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:25:35.755892 kubelet[3265]: I1108 00:25:35.755873 3265 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 8 00:25:35.755963 kubelet[3265]: I1108 00:25:35.755888 3265 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 8 00:25:35.755963 kubelet[3265]: I1108 00:25:35.755911 3265 policy_none.go:49] "None policy: Start" Nov 8 00:25:35.755963 kubelet[3265]: I1108 00:25:35.755924 3265 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 8 00:25:35.755963 kubelet[3265]: I1108 00:25:35.755936 3265 state_mem.go:35] "Initializing new in-memory state store" Nov 8 00:25:35.756118 kubelet[3265]: I1108 00:25:35.756091 3265 state_mem.go:75] "Updated machine memory state" Nov 8 00:25:35.759592 kubelet[3265]: E1108 00:25:35.759567 3265 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 8 00:25:35.759775 kubelet[3265]: I1108 00:25:35.759758 3265 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:25:35.759840 kubelet[3265]: I1108 00:25:35.759773 3265 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:25:35.760734 kubelet[3265]: I1108 00:25:35.760305 3265 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:25:35.764992 kubelet[3265]: E1108 00:25:35.762514 3265 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 8 00:25:35.818256 kubelet[3265]: I1108 00:25:35.818216 3265 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-036966ce4d" Nov 8 00:25:35.818543 kubelet[3265]: I1108 00:25:35.818216 3265 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-036966ce4d" Nov 8 00:25:36.160982 kubelet[3265]: I1108 00:25:35.818389 3265 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-036966ce4d" Nov 8 00:25:36.165102 kubelet[3265]: I1108 00:25:36.163724 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fcd8fe706feac2e3fa2973585b34daef-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-036966ce4d\" (UID: \"fcd8fe706feac2e3fa2973585b34daef\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-036966ce4d" Nov 8 00:25:36.165102 kubelet[3265]: I1108 00:25:36.163774 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/98b41822aedf8a4fc6c20febfd21e1e5-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-036966ce4d\" (UID: \"98b41822aedf8a4fc6c20febfd21e1e5\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-036966ce4d" Nov 8 00:25:36.165102 kubelet[3265]: I1108 00:25:36.163795 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fcd8fe706feac2e3fa2973585b34daef-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-036966ce4d\" (UID: \"fcd8fe706feac2e3fa2973585b34daef\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-036966ce4d" Nov 8 00:25:36.165102 kubelet[3265]: I1108 00:25:36.163820 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fcd8fe706feac2e3fa2973585b34daef-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-036966ce4d\" (UID: \"fcd8fe706feac2e3fa2973585b34daef\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-036966ce4d" Nov 8 00:25:36.165102 kubelet[3265]: I1108 00:25:36.163842 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/98b41822aedf8a4fc6c20febfd21e1e5-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-036966ce4d\" (UID: \"98b41822aedf8a4fc6c20febfd21e1e5\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-036966ce4d" Nov 8 00:25:36.165408 kubelet[3265]: I1108 00:25:36.163862 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/98b41822aedf8a4fc6c20febfd21e1e5-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-036966ce4d\" (UID: \"98b41822aedf8a4fc6c20febfd21e1e5\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-036966ce4d" Nov 8 00:25:36.165408 kubelet[3265]: I1108 00:25:36.163882 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/98b41822aedf8a4fc6c20febfd21e1e5-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-036966ce4d\" (UID: \"98b41822aedf8a4fc6c20febfd21e1e5\") " 
pod="kube-system/kube-controller-manager-ci-4081.3.6-n-036966ce4d" Nov 8 00:25:36.165408 kubelet[3265]: I1108 00:25:36.163915 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/98b41822aedf8a4fc6c20febfd21e1e5-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-036966ce4d\" (UID: \"98b41822aedf8a4fc6c20febfd21e1e5\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-036966ce4d" Nov 8 00:25:36.165408 kubelet[3265]: I1108 00:25:36.163944 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1bcc7ad69bea3c7ad2bec9ff032e9515-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-036966ce4d\" (UID: \"1bcc7ad69bea3c7ad2bec9ff032e9515\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-036966ce4d" Nov 8 00:25:36.171107 kubelet[3265]: I1108 00:25:36.169183 3265 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-036966ce4d" Nov 8 00:25:36.197867 kubelet[3265]: I1108 00:25:36.197830 3265 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 8 00:25:36.207900 kubelet[3265]: I1108 00:25:36.200268 3265 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 8 00:25:36.207900 kubelet[3265]: I1108 00:25:36.201263 3265 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 8 00:25:36.210320 kubelet[3265]: I1108 00:25:36.209955 3265 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.6-n-036966ce4d" Nov 8 00:25:36.210320 kubelet[3265]: I1108 00:25:36.210041 3265 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-036966ce4d" Nov 8 00:25:36.645498 kubelet[3265]: I1108 00:25:36.645212 3265 apiserver.go:52] "Watching apiserver" Nov 8 00:25:36.682641 kubelet[3265]: I1108 00:25:36.681503 3265 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 8 00:25:36.741962 kubelet[3265]: I1108 00:25:36.740898 3265 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-036966ce4d" Nov 8 00:25:36.741962 kubelet[3265]: I1108 00:25:36.741881 3265 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-036966ce4d" Nov 8 00:25:36.756610 kubelet[3265]: I1108 00:25:36.756048 3265 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 8 00:25:36.756610 kubelet[3265]: E1108 00:25:36.756118 3265 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-036966ce4d\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-036966ce4d" Nov 8 00:25:36.760626 kubelet[3265]: I1108 00:25:36.760596 3265 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 8 00:25:36.760859 kubelet[3265]: E1108 00:25:36.760839 3265 kubelet.go:3311] "Failed 
creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-036966ce4d\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-n-036966ce4d" Nov 8 00:25:36.804723 kubelet[3265]: I1108 00:25:36.804005 3265 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-036966ce4d" podStartSLOduration=0.803983749 podStartE2EDuration="803.983749ms" podCreationTimestamp="2025-11-08 00:25:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:25:36.790245786 +0000 UTC m=+1.214287428" watchObservedRunningTime="2025-11-08 00:25:36.803983749 +0000 UTC m=+1.228025391" Nov 8 00:25:36.817168 kubelet[3265]: I1108 00:25:36.816864 3265 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-n-036966ce4d" podStartSLOduration=0.816841007 podStartE2EDuration="816.841007ms" podCreationTimestamp="2025-11-08 00:25:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:25:36.80419505 +0000 UTC m=+1.228236692" watchObservedRunningTime="2025-11-08 00:25:36.816841007 +0000 UTC m=+1.240882649" Nov 8 00:25:36.817168 kubelet[3265]: I1108 00:25:36.817041 3265 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-036966ce4d" podStartSLOduration=0.817033808 podStartE2EDuration="817.033808ms" podCreationTimestamp="2025-11-08 00:25:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:25:36.816668107 +0000 UTC m=+1.240709849" watchObservedRunningTime="2025-11-08 00:25:36.817033808 +0000 UTC m=+1.241075450" Nov 8 00:25:41.304967 kubelet[3265]: I1108 00:25:41.304797 3265 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 8 00:25:41.305802 containerd[1736]: time="2025-11-08T00:25:41.305752957Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 8 00:25:41.306169 kubelet[3265]: I1108 00:25:41.305971 3265 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 8 00:25:41.939693 systemd[1]: Created slice kubepods-besteffort-pod2f7018aa_635f_4e32_a850_7896ad4f11e3.slice - libcontainer container kubepods-besteffort-pod2f7018aa_635f_4e32_a850_7896ad4f11e3.slice. 
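[Annotation] The podStartSLOduration figures above are plain timestamp arithmetic: with both pull timestamps at their zero value (the static-pod images were already present, so no pull window is subtracted), the SLO duration is simply observedRunningTime minus podCreationTimestamp. Reproducing the kube-controller-manager number from the log:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // podCreationTimestamp and observedRunningTime as logged for
        // kube-controller-manager-ci-4081.3.6-n-036966ce4d.
        created := time.Date(2025, 11, 8, 0, 25, 36, 0, time.UTC)
        running := time.Date(2025, 11, 8, 0, 25, 36, 803983749, time.UTC)
        fmt.Println(running.Sub(created)) // 803.983749ms == podStartSLOduration
    }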
Nov 8 00:25:42.105082 kubelet[3265]: I1108 00:25:42.105040 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2f7018aa-635f-4e32-a850-7896ad4f11e3-kube-proxy\") pod \"kube-proxy-r9pz7\" (UID: \"2f7018aa-635f-4e32-a850-7896ad4f11e3\") " pod="kube-system/kube-proxy-r9pz7" Nov 8 00:25:42.105265 kubelet[3265]: I1108 00:25:42.105109 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2f7018aa-635f-4e32-a850-7896ad4f11e3-xtables-lock\") pod \"kube-proxy-r9pz7\" (UID: \"2f7018aa-635f-4e32-a850-7896ad4f11e3\") " pod="kube-system/kube-proxy-r9pz7" Nov 8 00:25:42.105265 kubelet[3265]: I1108 00:25:42.105136 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2f7018aa-635f-4e32-a850-7896ad4f11e3-lib-modules\") pod \"kube-proxy-r9pz7\" (UID: \"2f7018aa-635f-4e32-a850-7896ad4f11e3\") " pod="kube-system/kube-proxy-r9pz7" Nov 8 00:25:42.105265 kubelet[3265]: I1108 00:25:42.105217 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdgwf\" (UniqueName: \"kubernetes.io/projected/2f7018aa-635f-4e32-a850-7896ad4f11e3-kube-api-access-rdgwf\") pod \"kube-proxy-r9pz7\" (UID: \"2f7018aa-635f-4e32-a850-7896ad4f11e3\") " pod="kube-system/kube-proxy-r9pz7" Nov 8 00:25:42.211119 kubelet[3265]: E1108 00:25:42.210665 3265 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 8 00:25:42.211119 kubelet[3265]: E1108 00:25:42.210719 3265 projected.go:194] Error preparing data for projected volume kube-api-access-rdgwf for pod kube-system/kube-proxy-r9pz7: configmap "kube-root-ca.crt" not found Nov 8 00:25:42.211119 kubelet[3265]: E1108 00:25:42.210814 3265 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2f7018aa-635f-4e32-a850-7896ad4f11e3-kube-api-access-rdgwf podName:2f7018aa-635f-4e32-a850-7896ad4f11e3 nodeName:}" failed. No retries permitted until 2025-11-08 00:25:42.710775782 +0000 UTC m=+7.134817424 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rdgwf" (UniqueName: "kubernetes.io/projected/2f7018aa-635f-4e32-a850-7896ad4f11e3-kube-api-access-rdgwf") pod "kube-proxy-r9pz7" (UID: "2f7018aa-635f-4e32-a850-7896ad4f11e3") : configmap "kube-root-ca.crt" not found Nov 8 00:25:42.542958 systemd[1]: Created slice kubepods-besteffort-pod1915f52f_9a2e_469a_95d2_d3c702e7a962.slice - libcontainer container kubepods-besteffort-pod1915f52f_9a2e_469a_95d2_d3c702e7a962.slice. 
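[Annotation] The kube-api-access mount above fails because the kube-root-ca.crt ConfigMap, which kube-controller-manager publishes into every namespace, does not yet exist this early in bootstrap. The reconciler records a "no retries permitted until" deadline 500ms out and retries once it passes; evidently the retry succeeds, since the kube-proxy sandbox is created below at 00:25:42.85. A sketch of that gating follows, with a fixed 500ms backoff for brevity where the real reconciler grows the delay on repeated failures; the mount closure here is illustrative only.

    package main

    import (
        "fmt"
        "time"
    )

    // mountWithRetry sketches the gating seen in the log: a failed volume
    // mount sets a not-before deadline, and the operation is skipped until
    // that deadline has passed.
    func mountWithRetry(mount func() error, backoff time.Duration) {
        var notBefore time.Time
        for {
            if time.Now().Before(notBefore) {
                time.Sleep(time.Until(notBefore))
            }
            if err := mount(); err != nil {
                notBefore = time.Now().Add(backoff)
                fmt.Printf("mount failed (%v); no retries until %s\n",
                    err, notBefore.Format(time.RFC3339Nano))
                continue
            }
            fmt.Println("mount succeeded")
            return
        }
    }

    func main() {
        attempts := 0
        mountWithRetry(func() error {
            attempts++
            if attempts < 2 { // first attempt: ConfigMap not published yet
                return fmt.Errorf("configmap %q not found", "kube-root-ca.crt")
            }
            return nil
        }, 500*time.Millisecond)
    }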
Nov 8 00:25:42.709438 kubelet[3265]: I1108 00:25:42.709382 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1915f52f-9a2e-469a-95d2-d3c702e7a962-var-lib-calico\") pod \"tigera-operator-7dcd859c48-5jf7n\" (UID: \"1915f52f-9a2e-469a-95d2-d3c702e7a962\") " pod="tigera-operator/tigera-operator-7dcd859c48-5jf7n" Nov 8 00:25:42.709438 kubelet[3265]: I1108 00:25:42.709443 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b95gf\" (UniqueName: \"kubernetes.io/projected/1915f52f-9a2e-469a-95d2-d3c702e7a962-kube-api-access-b95gf\") pod \"tigera-operator-7dcd859c48-5jf7n\" (UID: \"1915f52f-9a2e-469a-95d2-d3c702e7a962\") " pod="tigera-operator/tigera-operator-7dcd859c48-5jf7n" Nov 8 00:25:42.849844 containerd[1736]: time="2025-11-08T00:25:42.849195810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-5jf7n,Uid:1915f52f-9a2e-469a-95d2-d3c702e7a962,Namespace:tigera-operator,Attempt:0,}" Nov 8 00:25:42.851211 containerd[1736]: time="2025-11-08T00:25:42.850508018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r9pz7,Uid:2f7018aa-635f-4e32-a850-7896ad4f11e3,Namespace:kube-system,Attempt:0,}" Nov 8 00:25:42.914180 containerd[1736]: time="2025-11-08T00:25:42.910949780Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:25:42.914180 containerd[1736]: time="2025-11-08T00:25:42.911012880Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:25:42.914180 containerd[1736]: time="2025-11-08T00:25:42.911036480Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:25:42.914180 containerd[1736]: time="2025-11-08T00:25:42.911121881Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:25:42.938975 containerd[1736]: time="2025-11-08T00:25:42.937994242Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:25:42.938975 containerd[1736]: time="2025-11-08T00:25:42.938056342Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:25:42.938975 containerd[1736]: time="2025-11-08T00:25:42.938076943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:25:42.938975 containerd[1736]: time="2025-11-08T00:25:42.938159243Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:25:42.944939 systemd[1]: Started cri-containerd-ff2c0b49daa1fce0825482697d4680013714f71203b213f308cb27233f610d8d.scope - libcontainer container ff2c0b49daa1fce0825482697d4680013714f71203b213f308cb27233f610d8d. Nov 8 00:25:42.963950 systemd[1]: Started cri-containerd-537804d14e8ef9f3f993273d157eb542666b25c9462cda47a34fda8f37ce3651.scope - libcontainer container 537804d14e8ef9f3f993273d157eb542666b25c9462cda47a34fda8f37ce3651. 
Nov 8 00:25:43.016503 containerd[1736]: time="2025-11-08T00:25:43.016334912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r9pz7,Uid:2f7018aa-635f-4e32-a850-7896ad4f11e3,Namespace:kube-system,Attempt:0,} returns sandbox id \"537804d14e8ef9f3f993273d157eb542666b25c9462cda47a34fda8f37ce3651\"" Nov 8 00:25:43.043461 containerd[1736]: time="2025-11-08T00:25:43.043404074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-5jf7n,Uid:1915f52f-9a2e-469a-95d2-d3c702e7a962,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ff2c0b49daa1fce0825482697d4680013714f71203b213f308cb27233f610d8d\"" Nov 8 00:25:43.045623 containerd[1736]: time="2025-11-08T00:25:43.045591287Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 8 00:25:43.048850 containerd[1736]: time="2025-11-08T00:25:43.048815406Z" level=info msg="CreateContainer within sandbox \"537804d14e8ef9f3f993273d157eb542666b25c9462cda47a34fda8f37ce3651\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 8 00:25:43.087109 containerd[1736]: time="2025-11-08T00:25:43.087057936Z" level=info msg="CreateContainer within sandbox \"537804d14e8ef9f3f993273d157eb542666b25c9462cda47a34fda8f37ce3651\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9fb9c893085be629bdc5cb8183ed1efc0c1c903da91eea53e45ffadf65212b61\"" Nov 8 00:25:43.087883 containerd[1736]: time="2025-11-08T00:25:43.087830040Z" level=info msg="StartContainer for \"9fb9c893085be629bdc5cb8183ed1efc0c1c903da91eea53e45ffadf65212b61\"" Nov 8 00:25:43.121879 systemd[1]: Started cri-containerd-9fb9c893085be629bdc5cb8183ed1efc0c1c903da91eea53e45ffadf65212b61.scope - libcontainer container 9fb9c893085be629bdc5cb8183ed1efc0c1c903da91eea53e45ffadf65212b61. Nov 8 00:25:43.151497 containerd[1736]: time="2025-11-08T00:25:43.151389821Z" level=info msg="StartContainer for \"9fb9c893085be629bdc5cb8183ed1efc0c1c903da91eea53e45ffadf65212b61\" returns successfully" Nov 8 00:25:43.931155 kubelet[3265]: I1108 00:25:43.930919 3265 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-r9pz7" podStartSLOduration=2.9309008949999997 podStartE2EDuration="2.930900895s" podCreationTimestamp="2025-11-08 00:25:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:25:43.775964566 +0000 UTC m=+8.200006208" watchObservedRunningTime="2025-11-08 00:25:43.930900895 +0000 UTC m=+8.354942537" Nov 8 00:25:44.536383 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2165759561.mount: Deactivated successfully. 
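The pod_startup_latency_tracker entry above can be cross-checked from the timestamps it prints: with no image pull recorded for kube-proxy-r9pz7 (both pull timestamps are zero values), the E2E and SLO durations are simply watchObservedRunningTime minus podCreationTimestamp. A small check in Python, using only numbers from the log:

from datetime import datetime, timezone

created  = datetime(2025, 11, 8, 0, 25, 41, 0, tzinfo=timezone.utc)       # podCreationTimestamp
observed = datetime(2025, 11, 8, 0, 25, 43, 930900, tzinfo=timezone.utc)  # watchObservedRunningTime, truncated to microseconds

print(observed - created)  # 0:00:02.930900, matching podStartE2EDuration "2.930900895s"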
Nov 8 00:25:45.325842 containerd[1736]: time="2025-11-08T00:25:45.324821951Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:25:45.326985 containerd[1736]: time="2025-11-08T00:25:45.326884464Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 8 00:25:45.337413 containerd[1736]: time="2025-11-08T00:25:45.337341027Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:25:45.341856 containerd[1736]: time="2025-11-08T00:25:45.341784053Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:25:45.343333 containerd[1736]: time="2025-11-08T00:25:45.342735359Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.297099072s" Nov 8 00:25:45.343333 containerd[1736]: time="2025-11-08T00:25:45.342784659Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 8 00:25:45.350795 containerd[1736]: time="2025-11-08T00:25:45.350751307Z" level=info msg="CreateContainer within sandbox \"ff2c0b49daa1fce0825482697d4680013714f71203b213f308cb27233f610d8d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 8 00:25:45.389994 containerd[1736]: time="2025-11-08T00:25:45.389943542Z" level=info msg="CreateContainer within sandbox \"ff2c0b49daa1fce0825482697d4680013714f71203b213f308cb27233f610d8d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7b7ad128a0d2630eb254b3c47e7fe81695c0fa634679f51d3672193f4f8b961f\"" Nov 8 00:25:45.390907 containerd[1736]: time="2025-11-08T00:25:45.390845247Z" level=info msg="StartContainer for \"7b7ad128a0d2630eb254b3c47e7fe81695c0fa634679f51d3672193f4f8b961f\"" Nov 8 00:25:45.426876 systemd[1]: Started cri-containerd-7b7ad128a0d2630eb254b3c47e7fe81695c0fa634679f51d3672193f4f8b961f.scope - libcontainer container 7b7ad128a0d2630eb254b3c47e7fe81695c0fa634679f51d3672193f4f8b961f. 
Nov 8 00:25:45.455594 containerd[1736]: time="2025-11-08T00:25:45.455439435Z" level=info msg="StartContainer for \"7b7ad128a0d2630eb254b3c47e7fe81695c0fa634679f51d3672193f4f8b961f\" returns successfully" Nov 8 00:25:49.517340 kubelet[3265]: I1108 00:25:49.517080 3265 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-5jf7n" podStartSLOduration=5.2180380490000005 podStartE2EDuration="7.517057832s" podCreationTimestamp="2025-11-08 00:25:42 +0000 UTC" firstStartedPulling="2025-11-08 00:25:43.045014384 +0000 UTC m=+7.469056026" lastFinishedPulling="2025-11-08 00:25:45.344034167 +0000 UTC m=+9.768075809" observedRunningTime="2025-11-08 00:25:45.79166965 +0000 UTC m=+10.215711392" watchObservedRunningTime="2025-11-08 00:25:49.517057832 +0000 UTC m=+13.941099474" Nov 8 00:25:51.757238 sudo[2241]: pam_unix(sudo:session): session closed for user root Nov 8 00:25:51.862691 sshd[2238]: pam_unix(sshd:session): session closed for user core Nov 8 00:25:51.869155 systemd-logind[1708]: Session 9 logged out. Waiting for processes to exit. Nov 8 00:25:51.871344 systemd[1]: sshd@6-10.200.8.41:22-10.200.16.10:59766.service: Deactivated successfully. Nov 8 00:25:51.874232 systemd[1]: session-9.scope: Deactivated successfully. Nov 8 00:25:51.874522 systemd[1]: session-9.scope: Consumed 5.636s CPU time, 157.6M memory peak, 0B memory swap peak. Nov 8 00:25:51.877787 systemd-logind[1708]: Removed session 9. Nov 8 00:25:58.083862 systemd[1]: Created slice kubepods-besteffort-pode4c78266_90ff_4ad7_8ff8_0e9e4f785a00.slice - libcontainer container kubepods-besteffort-pode4c78266_90ff_4ad7_8ff8_0e9e4f785a00.slice. Nov 8 00:25:58.214582 kubelet[3265]: I1108 00:25:58.214428 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4c78266-90ff-4ad7-8ff8-0e9e4f785a00-tigera-ca-bundle\") pod \"calico-typha-76c6b87ff8-8nn9h\" (UID: \"e4c78266-90ff-4ad7-8ff8-0e9e4f785a00\") " pod="calico-system/calico-typha-76c6b87ff8-8nn9h" Nov 8 00:25:58.214582 kubelet[3265]: I1108 00:25:58.214476 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e4c78266-90ff-4ad7-8ff8-0e9e4f785a00-typha-certs\") pod \"calico-typha-76c6b87ff8-8nn9h\" (UID: \"e4c78266-90ff-4ad7-8ff8-0e9e4f785a00\") " pod="calico-system/calico-typha-76c6b87ff8-8nn9h" Nov 8 00:25:58.214582 kubelet[3265]: I1108 00:25:58.214501 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlffw\" (UniqueName: \"kubernetes.io/projected/e4c78266-90ff-4ad7-8ff8-0e9e4f785a00-kube-api-access-xlffw\") pod \"calico-typha-76c6b87ff8-8nn9h\" (UID: \"e4c78266-90ff-4ad7-8ff8-0e9e4f785a00\") " pod="calico-system/calico-typha-76c6b87ff8-8nn9h" Nov 8 00:25:58.300649 systemd[1]: Created slice kubepods-besteffort-pod8f036417_4b1f_4eb7_b2de_6addaad2d93f.slice - libcontainer container kubepods-besteffort-pod8f036417_4b1f_4eb7_b2de_6addaad2d93f.slice. 
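The tigera-operator entry above shows the same tracker with an image pull in the middle; on these numbers the SLO figure is the end-to-end duration minus the pull window (a reading of this log, not a statement about kubelet internals). Checked in Python with the values printed above:

# Seconds past 00:25:00, taken from the tigera-operator startup-latency line above.
first_pull = 43.045014384   # firstStartedPulling
last_pull  = 45.344034167   # lastFinishedPulling
e2e        = 7.517057832    # podStartE2EDuration

print(e2e - (last_pull - first_pull))  # ~5.218038049, matching podStartSLOduration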
Nov 8 00:25:58.391852 containerd[1736]: time="2025-11-08T00:25:58.391813436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-76c6b87ff8-8nn9h,Uid:e4c78266-90ff-4ad7-8ff8-0e9e4f785a00,Namespace:calico-system,Attempt:0,}" Nov 8 00:25:58.416805 kubelet[3265]: I1108 00:25:58.415791 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vm9w\" (UniqueName: \"kubernetes.io/projected/8f036417-4b1f-4eb7-b2de-6addaad2d93f-kube-api-access-8vm9w\") pod \"calico-node-97vqn\" (UID: \"8f036417-4b1f-4eb7-b2de-6addaad2d93f\") " pod="calico-system/calico-node-97vqn" Nov 8 00:25:58.416805 kubelet[3265]: I1108 00:25:58.415844 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8f036417-4b1f-4eb7-b2de-6addaad2d93f-cni-log-dir\") pod \"calico-node-97vqn\" (UID: \"8f036417-4b1f-4eb7-b2de-6addaad2d93f\") " pod="calico-system/calico-node-97vqn" Nov 8 00:25:58.416805 kubelet[3265]: I1108 00:25:58.415868 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8f036417-4b1f-4eb7-b2de-6addaad2d93f-policysync\") pod \"calico-node-97vqn\" (UID: \"8f036417-4b1f-4eb7-b2de-6addaad2d93f\") " pod="calico-system/calico-node-97vqn" Nov 8 00:25:58.416805 kubelet[3265]: I1108 00:25:58.415890 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8f036417-4b1f-4eb7-b2de-6addaad2d93f-var-run-calico\") pod \"calico-node-97vqn\" (UID: \"8f036417-4b1f-4eb7-b2de-6addaad2d93f\") " pod="calico-system/calico-node-97vqn" Nov 8 00:25:58.416805 kubelet[3265]: I1108 00:25:58.415918 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8f036417-4b1f-4eb7-b2de-6addaad2d93f-var-lib-calico\") pod \"calico-node-97vqn\" (UID: \"8f036417-4b1f-4eb7-b2de-6addaad2d93f\") " pod="calico-system/calico-node-97vqn" Nov 8 00:25:58.417142 kubelet[3265]: I1108 00:25:58.415939 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8f036417-4b1f-4eb7-b2de-6addaad2d93f-flexvol-driver-host\") pod \"calico-node-97vqn\" (UID: \"8f036417-4b1f-4eb7-b2de-6addaad2d93f\") " pod="calico-system/calico-node-97vqn" Nov 8 00:25:58.417142 kubelet[3265]: I1108 00:25:58.415969 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8f036417-4b1f-4eb7-b2de-6addaad2d93f-lib-modules\") pod \"calico-node-97vqn\" (UID: \"8f036417-4b1f-4eb7-b2de-6addaad2d93f\") " pod="calico-system/calico-node-97vqn" Nov 8 00:25:58.417142 kubelet[3265]: I1108 00:25:58.415993 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8f036417-4b1f-4eb7-b2de-6addaad2d93f-xtables-lock\") pod \"calico-node-97vqn\" (UID: \"8f036417-4b1f-4eb7-b2de-6addaad2d93f\") " pod="calico-system/calico-node-97vqn" Nov 8 00:25:58.417142 kubelet[3265]: I1108 00:25:58.416017 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: 
\"kubernetes.io/secret/8f036417-4b1f-4eb7-b2de-6addaad2d93f-node-certs\") pod \"calico-node-97vqn\" (UID: \"8f036417-4b1f-4eb7-b2de-6addaad2d93f\") " pod="calico-system/calico-node-97vqn" Nov 8 00:25:58.417142 kubelet[3265]: I1108 00:25:58.416038 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8f036417-4b1f-4eb7-b2de-6addaad2d93f-cni-net-dir\") pod \"calico-node-97vqn\" (UID: \"8f036417-4b1f-4eb7-b2de-6addaad2d93f\") " pod="calico-system/calico-node-97vqn" Nov 8 00:25:58.417342 kubelet[3265]: I1108 00:25:58.416061 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f036417-4b1f-4eb7-b2de-6addaad2d93f-tigera-ca-bundle\") pod \"calico-node-97vqn\" (UID: \"8f036417-4b1f-4eb7-b2de-6addaad2d93f\") " pod="calico-system/calico-node-97vqn" Nov 8 00:25:58.417342 kubelet[3265]: I1108 00:25:58.416084 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8f036417-4b1f-4eb7-b2de-6addaad2d93f-cni-bin-dir\") pod \"calico-node-97vqn\" (UID: \"8f036417-4b1f-4eb7-b2de-6addaad2d93f\") " pod="calico-system/calico-node-97vqn" Nov 8 00:25:58.424619 kubelet[3265]: E1108 00:25:58.424352 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wpv6d" podUID="88161698-8450-46cd-aabf-3650fadd565e" Nov 8 00:25:58.456860 containerd[1736]: time="2025-11-08T00:25:58.456630825Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:25:58.457044 containerd[1736]: time="2025-11-08T00:25:58.456911327Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:25:58.457044 containerd[1736]: time="2025-11-08T00:25:58.456971328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:25:58.459020 containerd[1736]: time="2025-11-08T00:25:58.458926739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:25:58.488925 systemd[1]: Started cri-containerd-a08df3a5d820b2f059e6058e1e5eb801ef47932fa8e48718011feb7287e8308a.scope - libcontainer container a08df3a5d820b2f059e6058e1e5eb801ef47932fa8e48718011feb7287e8308a. 
Nov 8 00:25:58.516738 kubelet[3265]: I1108 00:25:58.516635 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/88161698-8450-46cd-aabf-3650fadd565e-socket-dir\") pod \"csi-node-driver-wpv6d\" (UID: \"88161698-8450-46cd-aabf-3650fadd565e\") " pod="calico-system/csi-node-driver-wpv6d" Nov 8 00:25:58.516738 kubelet[3265]: I1108 00:25:58.516683 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/88161698-8450-46cd-aabf-3650fadd565e-varrun\") pod \"csi-node-driver-wpv6d\" (UID: \"88161698-8450-46cd-aabf-3650fadd565e\") " pod="calico-system/csi-node-driver-wpv6d" Nov 8 00:25:58.516947 kubelet[3265]: I1108 00:25:58.516795 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/88161698-8450-46cd-aabf-3650fadd565e-registration-dir\") pod \"csi-node-driver-wpv6d\" (UID: \"88161698-8450-46cd-aabf-3650fadd565e\") " pod="calico-system/csi-node-driver-wpv6d" Nov 8 00:25:58.516947 kubelet[3265]: I1108 00:25:58.516825 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56rtw\" (UniqueName: \"kubernetes.io/projected/88161698-8450-46cd-aabf-3650fadd565e-kube-api-access-56rtw\") pod \"csi-node-driver-wpv6d\" (UID: \"88161698-8450-46cd-aabf-3650fadd565e\") " pod="calico-system/csi-node-driver-wpv6d" Nov 8 00:25:58.516947 kubelet[3265]: I1108 00:25:58.516901 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/88161698-8450-46cd-aabf-3650fadd565e-kubelet-dir\") pod \"csi-node-driver-wpv6d\" (UID: \"88161698-8450-46cd-aabf-3650fadd565e\") " pod="calico-system/csi-node-driver-wpv6d" Nov 8 00:25:58.519285 kubelet[3265]: E1108 00:25:58.519254 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:58.519285 kubelet[3265]: W1108 00:25:58.519282 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:58.519450 kubelet[3265]: E1108 00:25:58.519322 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:25:58.521219 kubelet[3265]: E1108 00:25:58.520729 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:58.521219 kubelet[3265]: W1108 00:25:58.520847 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:58.521219 kubelet[3265]: E1108 00:25:58.520878 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:25:58.524717 kubelet[3265]: E1108 00:25:58.523858 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:58.524717 kubelet[3265]: W1108 00:25:58.523891 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:58.524717 kubelet[3265]: E1108 00:25:58.523913 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:25:58.524717 kubelet[3265]: E1108 00:25:58.524239 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:58.524717 kubelet[3265]: W1108 00:25:58.524252 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:58.524717 kubelet[3265]: E1108 00:25:58.524266 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:25:58.524717 kubelet[3265]: E1108 00:25:58.524632 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:58.524717 kubelet[3265]: W1108 00:25:58.524644 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:58.524717 kubelet[3265]: E1108 00:25:58.524658 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:25:58.526742 kubelet[3265]: E1108 00:25:58.526240 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:58.526742 kubelet[3265]: W1108 00:25:58.526256 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:58.526742 kubelet[3265]: E1108 00:25:58.526373 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:25:58.527457 kubelet[3265]: E1108 00:25:58.527292 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:58.527457 kubelet[3265]: W1108 00:25:58.527320 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:58.527457 kubelet[3265]: E1108 00:25:58.527335 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:25:58.528713 kubelet[3265]: E1108 00:25:58.527989 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:58.528713 kubelet[3265]: W1108 00:25:58.528038 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:58.528713 kubelet[3265]: E1108 00:25:58.528054 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:25:58.528713 kubelet[3265]: E1108 00:25:58.528606 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:58.528713 kubelet[3265]: W1108 00:25:58.528637 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:58.528713 kubelet[3265]: E1108 00:25:58.528651 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:25:58.529084 kubelet[3265]: E1108 00:25:58.529065 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:58.529084 kubelet[3265]: W1108 00:25:58.529084 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:58.529179 kubelet[3265]: E1108 00:25:58.529098 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:25:58.529724 kubelet[3265]: E1108 00:25:58.529516 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:58.529724 kubelet[3265]: W1108 00:25:58.529530 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:58.529724 kubelet[3265]: E1108 00:25:58.529665 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:25:58.530730 kubelet[3265]: E1108 00:25:58.530120 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:58.530730 kubelet[3265]: W1108 00:25:58.530136 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:58.530730 kubelet[3265]: E1108 00:25:58.530265 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:25:58.530910 kubelet[3265]: E1108 00:25:58.530873 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:58.530910 kubelet[3265]: W1108 00:25:58.530886 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:58.530993 kubelet[3265]: E1108 00:25:58.530923 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:25:58.531724 kubelet[3265]: E1108 00:25:58.531322 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:58.531724 kubelet[3265]: W1108 00:25:58.531362 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:58.531724 kubelet[3265]: E1108 00:25:58.531376 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:25:58.531925 kubelet[3265]: E1108 00:25:58.531866 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:58.531980 kubelet[3265]: W1108 00:25:58.531927 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:58.531980 kubelet[3265]: E1108 00:25:58.531943 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:25:58.533716 kubelet[3265]: E1108 00:25:58.532760 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:58.533716 kubelet[3265]: W1108 00:25:58.532776 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:58.533716 kubelet[3265]: E1108 00:25:58.532790 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:25:58.534267 kubelet[3265]: E1108 00:25:58.534240 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:58.534267 kubelet[3265]: W1108 00:25:58.534260 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:58.534383 kubelet[3265]: E1108 00:25:58.534275 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:25:58.534764 kubelet[3265]: E1108 00:25:58.534560 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:58.534764 kubelet[3265]: W1108 00:25:58.534584 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:58.534764 kubelet[3265]: E1108 00:25:58.534600 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:25:58.534918 kubelet[3265]: E1108 00:25:58.534852 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:58.534918 kubelet[3265]: W1108 00:25:58.534864 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:58.534918 kubelet[3265]: E1108 00:25:58.534876 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:25:58.535724 kubelet[3265]: E1108 00:25:58.535130 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:58.535724 kubelet[3265]: W1108 00:25:58.535144 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:58.535724 kubelet[3265]: E1108 00:25:58.535156 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:25:58.535724 kubelet[3265]: E1108 00:25:58.535422 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:58.535724 kubelet[3265]: W1108 00:25:58.535434 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:58.535724 kubelet[3265]: E1108 00:25:58.535448 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:25:58.539570 kubelet[3265]: E1108 00:25:58.536509 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:58.539570 kubelet[3265]: W1108 00:25:58.536535 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:58.539570 kubelet[3265]: E1108 00:25:58.536555 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:25:58.539570 kubelet[3265]: E1108 00:25:58.536838 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:58.539570 kubelet[3265]: W1108 00:25:58.536848 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:58.539570 kubelet[3265]: E1108 00:25:58.537174 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:25:58.539570 kubelet[3265]: E1108 00:25:58.537419 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:58.539570 kubelet[3265]: W1108 00:25:58.537431 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:58.539570 kubelet[3265]: E1108 00:25:58.537444 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:25:58.539570 kubelet[3265]: E1108 00:25:58.537631 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:58.540183 kubelet[3265]: W1108 00:25:58.537642 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:58.540183 kubelet[3265]: E1108 00:25:58.537654 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:25:58.540183 kubelet[3265]: E1108 00:25:58.539562 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:58.540183 kubelet[3265]: W1108 00:25:58.539575 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:58.540183 kubelet[3265]: E1108 00:25:58.539736 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:25:58.540387 kubelet[3265]: E1108 00:25:58.540214 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:58.540387 kubelet[3265]: W1108 00:25:58.540226 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:58.540387 kubelet[3265]: E1108 00:25:58.540279 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:25:58.546587 kubelet[3265]: E1108 00:25:58.546239 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:58.546587 kubelet[3265]: W1108 00:25:58.546268 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:58.546587 kubelet[3265]: E1108 00:25:58.546284 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:25:58.554948 kubelet[3265]: E1108 00:25:58.554920 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:58.554948 kubelet[3265]: W1108 00:25:58.554939 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:58.555127 kubelet[3265]: E1108 00:25:58.554959 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:25:58.595910 containerd[1736]: time="2025-11-08T00:25:58.595861362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-76c6b87ff8-8nn9h,Uid:e4c78266-90ff-4ad7-8ff8-0e9e4f785a00,Namespace:calico-system,Attempt:0,} returns sandbox id \"a08df3a5d820b2f059e6058e1e5eb801ef47932fa8e48718011feb7287e8308a\"" Nov 8 00:25:58.598060 containerd[1736]: time="2025-11-08T00:25:58.598018975Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 8 00:25:58.606661 containerd[1736]: time="2025-11-08T00:25:58.606619027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-97vqn,Uid:8f036417-4b1f-4eb7-b2de-6addaad2d93f,Namespace:calico-system,Attempt:0,}" Nov 8 00:25:58.617999 kubelet[3265]: E1108 00:25:58.617968 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:58.617999 kubelet[3265]: W1108 00:25:58.617992 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:58.618323 kubelet[3265]: E1108 00:25:58.618018 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:25:58.619063 kubelet[3265]: E1108 00:25:58.618829 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:58.619063 kubelet[3265]: W1108 00:25:58.618855 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:58.619063 kubelet[3265]: E1108 00:25:58.618876 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:25:58.619406 kubelet[3265]: E1108 00:25:58.619222 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:58.619406 kubelet[3265]: W1108 00:25:58.619236 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:58.619406 kubelet[3265]: E1108 00:25:58.619250 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:25:58.619406 kubelet[3265]: E1108 00:25:58.619534 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:58.620285 kubelet[3265]: W1108 00:25:58.619546 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:58.620285 kubelet[3265]: E1108 00:25:58.619563 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:25:58.620285 kubelet[3265]: E1108 00:25:58.619903 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:58.620285 kubelet[3265]: W1108 00:25:58.619916 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:58.620285 kubelet[3265]: E1108 00:25:58.619929 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:25:58.620675 kubelet[3265]: E1108 00:25:58.620640 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:58.620675 kubelet[3265]: W1108 00:25:58.620659 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:58.620675 kubelet[3265]: E1108 00:25:58.620673 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:25:58.621002 kubelet[3265]: E1108 00:25:58.620955 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:58.621002 kubelet[3265]: W1108 00:25:58.620991 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:58.621153 kubelet[3265]: E1108 00:25:58.621005 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:25:58.621442 kubelet[3265]: E1108 00:25:58.621319 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:58.621442 kubelet[3265]: W1108 00:25:58.621332 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:58.621442 kubelet[3265]: E1108 00:25:58.621346 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:25:58.621721 kubelet[3265]: E1108 00:25:58.621625 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:58.621721 kubelet[3265]: W1108 00:25:58.621636 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:58.621721 kubelet[3265]: E1108 00:25:58.621649 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:25:58.623032 kubelet[3265]: E1108 00:25:58.622041 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:58.623032 kubelet[3265]: W1108 00:25:58.622069 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:58.623032 kubelet[3265]: E1108 00:25:58.622083 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:25:58.623032 kubelet[3265]: E1108 00:25:58.622352 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:58.623032 kubelet[3265]: W1108 00:25:58.622364 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:58.623032 kubelet[3265]: E1108 00:25:58.622396 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:25:58.623032 kubelet[3265]: E1108 00:25:58.622660 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:58.623032 kubelet[3265]: W1108 00:25:58.622672 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:58.623032 kubelet[3265]: E1108 00:25:58.622684 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:25:58.623032 kubelet[3265]: E1108 00:25:58.623004 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:58.623953 kubelet[3265]: W1108 00:25:58.623016 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:58.623953 kubelet[3265]: E1108 00:25:58.623029 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:25:58.623953 kubelet[3265]: E1108 00:25:58.623333 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:58.623953 kubelet[3265]: W1108 00:25:58.623344 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:58.623953 kubelet[3265]: E1108 00:25:58.623357 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:25:58.623953 kubelet[3265]: E1108 00:25:58.623631 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:58.623953 kubelet[3265]: W1108 00:25:58.623644 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:58.623953 kubelet[3265]: E1108 00:25:58.623666 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:25:58.624761 kubelet[3265]: E1108 00:25:58.624216 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:58.624761 kubelet[3265]: W1108 00:25:58.624232 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:58.624761 kubelet[3265]: E1108 00:25:58.624245 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:25:58.624761 kubelet[3265]: E1108 00:25:58.624497 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:58.624761 kubelet[3265]: W1108 00:25:58.624508 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:58.624761 kubelet[3265]: E1108 00:25:58.624520 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:25:58.625510 kubelet[3265]: E1108 00:25:58.624886 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:58.625510 kubelet[3265]: W1108 00:25:58.624898 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:58.625510 kubelet[3265]: E1108 00:25:58.624912 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:25:58.625510 kubelet[3265]: E1108 00:25:58.625160 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:58.625510 kubelet[3265]: W1108 00:25:58.625171 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:58.625510 kubelet[3265]: E1108 00:25:58.625184 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:25:58.625510 kubelet[3265]: E1108 00:25:58.625369 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:58.625510 kubelet[3265]: W1108 00:25:58.625378 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:58.625510 kubelet[3265]: E1108 00:25:58.625389 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:25:58.626520 kubelet[3265]: E1108 00:25:58.625767 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:58.626520 kubelet[3265]: W1108 00:25:58.625778 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:58.626520 kubelet[3265]: E1108 00:25:58.625792 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:25:58.626520 kubelet[3265]: E1108 00:25:58.626045 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:58.626520 kubelet[3265]: W1108 00:25:58.626056 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:58.626520 kubelet[3265]: E1108 00:25:58.626067 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:25:58.629012 kubelet[3265]: E1108 00:25:58.626539 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:58.629012 kubelet[3265]: W1108 00:25:58.626553 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:58.629012 kubelet[3265]: E1108 00:25:58.626575 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:25:58.629012 kubelet[3265]: E1108 00:25:58.626909 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:58.629012 kubelet[3265]: W1108 00:25:58.626920 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:58.629012 kubelet[3265]: E1108 00:25:58.626933 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:25:58.629012 kubelet[3265]: E1108 00:25:58.627230 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:58.629012 kubelet[3265]: W1108 00:25:58.627241 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:58.629012 kubelet[3265]: E1108 00:25:58.627262 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:25:58.638846 kubelet[3265]: E1108 00:25:58.638812 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:58.638846 kubelet[3265]: W1108 00:25:58.638840 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:58.638999 kubelet[3265]: E1108 00:25:58.638863 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:25:58.652958 containerd[1736]: time="2025-11-08T00:25:58.652232901Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:25:58.653238 containerd[1736]: time="2025-11-08T00:25:58.653119006Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:25:58.653238 containerd[1736]: time="2025-11-08T00:25:58.653147206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:25:58.653560 containerd[1736]: time="2025-11-08T00:25:58.653485708Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:25:58.686929 systemd[1]: Started cri-containerd-b2e460c851bc87fff0cf3a668cfea480f6c480de3d10fe87ee3b4a3be85eb75f.scope - libcontainer container b2e460c851bc87fff0cf3a668cfea480f6c480de3d10fe87ee3b4a3be85eb75f. Nov 8 00:25:58.716097 containerd[1736]: time="2025-11-08T00:25:58.716049684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-97vqn,Uid:8f036417-4b1f-4eb7-b2de-6addaad2d93f,Namespace:calico-system,Attempt:0,} returns sandbox id \"b2e460c851bc87fff0cf3a668cfea480f6c480de3d10fe87ee3b4a3be85eb75f\"" Nov 8 00:25:59.718742 kubelet[3265]: E1108 00:25:59.717288 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wpv6d" podUID="88161698-8450-46cd-aabf-3650fadd565e" Nov 8 00:25:59.846355 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1172623152.mount: Deactivated successfully. Nov 8 00:26:00.886520 containerd[1736]: time="2025-11-08T00:26:00.886467627Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:00.889187 containerd[1736]: time="2025-11-08T00:26:00.889045343Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 8 00:26:00.891730 containerd[1736]: time="2025-11-08T00:26:00.891544458Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:00.897729 containerd[1736]: time="2025-11-08T00:26:00.896129885Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:00.898433 containerd[1736]: time="2025-11-08T00:26:00.898384499Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.300322824s" Nov 8 00:26:00.898433 containerd[1736]: time="2025-11-08T00:26:00.898432099Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 8 00:26:00.902145 containerd[1736]: time="2025-11-08T00:26:00.902119721Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 8 00:26:00.932353 containerd[1736]: time="2025-11-08T00:26:00.932310903Z" level=info msg="CreateContainer within sandbox \"a08df3a5d820b2f059e6058e1e5eb801ef47932fa8e48718011feb7287e8308a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 8 00:26:00.967971 containerd[1736]: time="2025-11-08T00:26:00.967918717Z" level=info msg="CreateContainer within sandbox \"a08df3a5d820b2f059e6058e1e5eb801ef47932fa8e48718011feb7287e8308a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ea0ba0ef68bd5e04a79259031d9565c7a1e85d9f0bd192bfb36624d6d0966323\"" Nov 8 00:26:00.968818 containerd[1736]: 
time="2025-11-08T00:26:00.968580621Z" level=info msg="StartContainer for \"ea0ba0ef68bd5e04a79259031d9565c7a1e85d9f0bd192bfb36624d6d0966323\"" Nov 8 00:26:01.001874 systemd[1]: Started cri-containerd-ea0ba0ef68bd5e04a79259031d9565c7a1e85d9f0bd192bfb36624d6d0966323.scope - libcontainer container ea0ba0ef68bd5e04a79259031d9565c7a1e85d9f0bd192bfb36624d6d0966323. Nov 8 00:26:01.058212 containerd[1736]: time="2025-11-08T00:26:01.058149059Z" level=info msg="StartContainer for \"ea0ba0ef68bd5e04a79259031d9565c7a1e85d9f0bd192bfb36624d6d0966323\" returns successfully" Nov 8 00:26:01.719007 kubelet[3265]: E1108 00:26:01.717938 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wpv6d" podUID="88161698-8450-46cd-aabf-3650fadd565e" Nov 8 00:26:01.844260 kubelet[3265]: E1108 00:26:01.844221 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:01.844260 kubelet[3265]: W1108 00:26:01.844253 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:01.844654 kubelet[3265]: E1108 00:26:01.844280 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:26:01.844654 kubelet[3265]: E1108 00:26:01.844508 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:01.844654 kubelet[3265]: W1108 00:26:01.844520 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:01.844654 kubelet[3265]: E1108 00:26:01.844533 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:26:01.845005 kubelet[3265]: E1108 00:26:01.844751 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:01.845005 kubelet[3265]: W1108 00:26:01.844762 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:01.845005 kubelet[3265]: E1108 00:26:01.844776 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:26:01.845299 kubelet[3265]: E1108 00:26:01.845027 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:01.845299 kubelet[3265]: W1108 00:26:01.845039 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:01.845299 kubelet[3265]: E1108 00:26:01.845056 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:26:01.845481 kubelet[3265]: E1108 00:26:01.845302 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:01.845481 kubelet[3265]: W1108 00:26:01.845313 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:01.845481 kubelet[3265]: E1108 00:26:01.845327 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:26:01.845607 kubelet[3265]: E1108 00:26:01.845517 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:01.845607 kubelet[3265]: W1108 00:26:01.845526 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:01.845607 kubelet[3265]: E1108 00:26:01.845538 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:26:01.845762 kubelet[3265]: E1108 00:26:01.845722 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:01.845762 kubelet[3265]: W1108 00:26:01.845732 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:01.845762 kubelet[3265]: E1108 00:26:01.845746 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:26:01.845964 kubelet[3265]: E1108 00:26:01.845943 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:01.845964 kubelet[3265]: W1108 00:26:01.845958 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:01.846148 kubelet[3265]: E1108 00:26:01.845971 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:26:01.846227 kubelet[3265]: E1108 00:26:01.846191 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:01.846227 kubelet[3265]: W1108 00:26:01.846202 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:01.846227 kubelet[3265]: E1108 00:26:01.846215 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:26:01.846861 kubelet[3265]: E1108 00:26:01.846462 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:01.846861 kubelet[3265]: W1108 00:26:01.846475 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:01.846861 kubelet[3265]: E1108 00:26:01.846488 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:26:01.846861 kubelet[3265]: E1108 00:26:01.846678 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:01.846861 kubelet[3265]: W1108 00:26:01.846689 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:01.846861 kubelet[3265]: E1108 00:26:01.846725 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:26:01.847125 kubelet[3265]: E1108 00:26:01.846923 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:01.847125 kubelet[3265]: W1108 00:26:01.846934 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:01.847125 kubelet[3265]: E1108 00:26:01.846946 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:26:01.847264 kubelet[3265]: E1108 00:26:01.847129 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:01.847264 kubelet[3265]: W1108 00:26:01.847138 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:01.847264 kubelet[3265]: E1108 00:26:01.847151 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:26:01.847391 kubelet[3265]: E1108 00:26:01.847320 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:01.847391 kubelet[3265]: W1108 00:26:01.847329 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:01.847391 kubelet[3265]: E1108 00:26:01.847340 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:26:01.847531 kubelet[3265]: E1108 00:26:01.847508 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:01.847531 kubelet[3265]: W1108 00:26:01.847517 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:01.847531 kubelet[3265]: E1108 00:26:01.847528 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:26:01.944559 kubelet[3265]: E1108 00:26:01.944529 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:01.944559 kubelet[3265]: W1108 00:26:01.944549 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:01.944827 kubelet[3265]: E1108 00:26:01.944572 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:26:01.944939 kubelet[3265]: E1108 00:26:01.944919 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:01.944992 kubelet[3265]: W1108 00:26:01.944936 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:01.944992 kubelet[3265]: E1108 00:26:01.944970 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:26:01.945309 kubelet[3265]: E1108 00:26:01.945287 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:01.945309 kubelet[3265]: W1108 00:26:01.945302 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:01.945473 kubelet[3265]: E1108 00:26:01.945316 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:26:01.945608 kubelet[3265]: E1108 00:26:01.945592 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:01.945608 kubelet[3265]: W1108 00:26:01.945604 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:01.945751 kubelet[3265]: E1108 00:26:01.945618 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:26:01.945895 kubelet[3265]: E1108 00:26:01.945872 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:01.945895 kubelet[3265]: W1108 00:26:01.945889 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:01.946014 kubelet[3265]: E1108 00:26:01.945904 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:26:01.946241 kubelet[3265]: E1108 00:26:01.946224 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:01.946241 kubelet[3265]: W1108 00:26:01.946238 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:01.946418 kubelet[3265]: E1108 00:26:01.946251 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:26:01.946938 kubelet[3265]: E1108 00:26:01.946918 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:01.946938 kubelet[3265]: W1108 00:26:01.946934 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:01.947133 kubelet[3265]: E1108 00:26:01.946949 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:26:01.947235 kubelet[3265]: E1108 00:26:01.947215 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:01.947235 kubelet[3265]: W1108 00:26:01.947232 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:01.947374 kubelet[3265]: E1108 00:26:01.947246 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:26:01.947479 kubelet[3265]: E1108 00:26:01.947466 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:01.947539 kubelet[3265]: W1108 00:26:01.947482 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:01.947539 kubelet[3265]: E1108 00:26:01.947495 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:26:01.947788 kubelet[3265]: E1108 00:26:01.947768 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:01.947788 kubelet[3265]: W1108 00:26:01.947783 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:01.947932 kubelet[3265]: E1108 00:26:01.947796 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:26:01.948423 kubelet[3265]: E1108 00:26:01.948330 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:01.948423 kubelet[3265]: W1108 00:26:01.948347 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:01.948423 kubelet[3265]: E1108 00:26:01.948361 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:26:01.948957 kubelet[3265]: E1108 00:26:01.948612 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:01.948957 kubelet[3265]: W1108 00:26:01.948626 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:01.948957 kubelet[3265]: E1108 00:26:01.948787 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:26:01.949229 kubelet[3265]: E1108 00:26:01.949215 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:01.949229 kubelet[3265]: W1108 00:26:01.949227 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:01.949336 kubelet[3265]: E1108 00:26:01.949241 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:26:01.950241 kubelet[3265]: E1108 00:26:01.950223 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:01.950241 kubelet[3265]: W1108 00:26:01.950236 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:01.950449 kubelet[3265]: E1108 00:26:01.950251 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:26:01.950532 kubelet[3265]: E1108 00:26:01.950495 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:01.950532 kubelet[3265]: W1108 00:26:01.950506 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:01.950532 kubelet[3265]: E1108 00:26:01.950520 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:26:01.951296 kubelet[3265]: E1108 00:26:01.950887 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:01.951296 kubelet[3265]: W1108 00:26:01.950899 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:01.951296 kubelet[3265]: E1108 00:26:01.950932 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:26:01.951787 kubelet[3265]: E1108 00:26:01.951769 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:01.951787 kubelet[3265]: W1108 00:26:01.951783 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:01.951913 kubelet[3265]: E1108 00:26:01.951796 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:26:01.952090 kubelet[3265]: E1108 00:26:01.952071 3265 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:26:01.952090 kubelet[3265]: W1108 00:26:01.952086 3265 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:26:01.952177 kubelet[3265]: E1108 00:26:01.952100 3265 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:26:02.268849 containerd[1736]: time="2025-11-08T00:26:02.267890829Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:02.270707 containerd[1736]: time="2025-11-08T00:26:02.270648345Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 8 00:26:02.273274 containerd[1736]: time="2025-11-08T00:26:02.273208560Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:02.292748 containerd[1736]: time="2025-11-08T00:26:02.292665977Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:02.293787 containerd[1736]: time="2025-11-08T00:26:02.293530683Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.391230361s" Nov 8 00:26:02.293787 containerd[1736]: time="2025-11-08T00:26:02.293575583Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 8 00:26:02.301156 containerd[1736]: time="2025-11-08T00:26:02.301121128Z" level=info msg="CreateContainer within sandbox \"b2e460c851bc87fff0cf3a668cfea480f6c480de3d10fe87ee3b4a3be85eb75f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 8 00:26:02.335008 containerd[1736]: time="2025-11-08T00:26:02.334960332Z" level=info msg="CreateContainer within sandbox \"b2e460c851bc87fff0cf3a668cfea480f6c480de3d10fe87ee3b4a3be85eb75f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e4362831fc86e353f0d66edb7fbbde2c363bacd19925ac5deb9526f1a2a4b9dd\"" Nov 8 00:26:02.335796 containerd[1736]: time="2025-11-08T00:26:02.335761736Z" level=info msg="StartContainer for \"e4362831fc86e353f0d66edb7fbbde2c363bacd19925ac5deb9526f1a2a4b9dd\"" Nov 8 00:26:02.377876 systemd[1]: Started cri-containerd-e4362831fc86e353f0d66edb7fbbde2c363bacd19925ac5deb9526f1a2a4b9dd.scope - libcontainer container e4362831fc86e353f0d66edb7fbbde2c363bacd19925ac5deb9526f1a2a4b9dd. Nov 8 00:26:02.407858 containerd[1736]: time="2025-11-08T00:26:02.407811769Z" level=info msg="StartContainer for \"e4362831fc86e353f0d66edb7fbbde2c363bacd19925ac5deb9526f1a2a4b9dd\" returns successfully" Nov 8 00:26:02.418526 systemd[1]: cri-containerd-e4362831fc86e353f0d66edb7fbbde2c363bacd19925ac5deb9526f1a2a4b9dd.scope: Deactivated successfully. Nov 8 00:26:02.443523 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e4362831fc86e353f0d66edb7fbbde2c363bacd19925ac5deb9526f1a2a4b9dd-rootfs.mount: Deactivated successfully. 
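[editor's note] The repeated kubelet messages above are FlexVolume plugin probing: kubelet execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument init and parses its stdout as JSON. While that binary is missing the exec fails ("executable file not found in $PATH"), the captured output is empty, and unmarshalling "" yields "unexpected end of JSON input", so the probe retries and the same three lines recur. The flexvol-driver container created just above from the pod2daemon-flexvol:v3.30.4 image is the component that normally installs that uds driver, which is why it runs once to completion and its scope is deactivated immediately afterwards. Below is a minimal sketch, in Go, of the answer the init probe expects; it is not Calico's actual uds implementation, only an illustration of the documented FlexVolume call convention (subcommand in argv[1], a JSON status object on stdout).

package main

// Hypothetical FlexVolume driver stub; not Calico's uds binary. It only
// demonstrates the init handshake that the kubelet probe in the log above
// keeps attempting against nodeagent~uds/uds.

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON object a FlexVolume driver prints on stdout.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// init reports whether the driver implements attach/detach.
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
		return
	}
	// Other calls (mount, unmount, ...) are deliberately unimplemented here.
	out, _ := json.Marshal(driverStatus{Status: "Not supported"})
	fmt.Println(string(out))
}

With a binary like this in place at the probed path, init prints {"status":"Success","capabilities":{"attach":false}} and the unmarshal error disappears; an empty stdout is exactly what Go's encoding/json reports as "unexpected end of JSON input".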
Nov 8 00:26:02.808974 kubelet[3265]: I1108 00:26:02.808849 3265 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 8 00:26:02.834321 kubelet[3265]: I1108 00:26:02.833784 3265 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-76c6b87ff8-8nn9h" podStartSLOduration=2.529925384 podStartE2EDuration="4.833761729s" podCreationTimestamp="2025-11-08 00:25:58 +0000 UTC" firstStartedPulling="2025-11-08 00:25:58.597399471 +0000 UTC m=+23.021441213" lastFinishedPulling="2025-11-08 00:26:00.901235916 +0000 UTC m=+25.325277558" observedRunningTime="2025-11-08 00:26:01.819733735 +0000 UTC m=+26.243775377" watchObservedRunningTime="2025-11-08 00:26:02.833761729 +0000 UTC m=+27.257803471" Nov 8 00:26:03.719245 kubelet[3265]: E1108 00:26:03.719205 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wpv6d" podUID="88161698-8450-46cd-aabf-3650fadd565e" Nov 8 00:26:03.931672 containerd[1736]: time="2025-11-08T00:26:03.931608726Z" level=info msg="shim disconnected" id=e4362831fc86e353f0d66edb7fbbde2c363bacd19925ac5deb9526f1a2a4b9dd namespace=k8s.io Nov 8 00:26:03.931672 containerd[1736]: time="2025-11-08T00:26:03.931663127Z" level=warning msg="cleaning up after shim disconnected" id=e4362831fc86e353f0d66edb7fbbde2c363bacd19925ac5deb9526f1a2a4b9dd namespace=k8s.io Nov 8 00:26:03.931672 containerd[1736]: time="2025-11-08T00:26:03.931674527Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:26:04.817533 containerd[1736]: time="2025-11-08T00:26:04.817438714Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 8 00:26:05.718671 kubelet[3265]: E1108 00:26:05.718204 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wpv6d" podUID="88161698-8450-46cd-aabf-3650fadd565e" Nov 8 00:26:07.639141 kubelet[3265]: I1108 00:26:07.638873 3265 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 8 00:26:07.720090 kubelet[3265]: E1108 00:26:07.720026 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wpv6d" podUID="88161698-8450-46cd-aabf-3650fadd565e" Nov 8 00:26:09.085400 containerd[1736]: time="2025-11-08T00:26:09.085347076Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:09.087576 containerd[1736]: time="2025-11-08T00:26:09.087513089Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 8 00:26:09.092345 containerd[1736]: time="2025-11-08T00:26:09.092074316Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:09.099721 containerd[1736]: time="2025-11-08T00:26:09.097573748Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:09.102141 containerd[1736]: time="2025-11-08T00:26:09.101712773Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.284218459s" Nov 8 00:26:09.102141 containerd[1736]: time="2025-11-08T00:26:09.101755173Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 8 00:26:09.110881 containerd[1736]: time="2025-11-08T00:26:09.110839527Z" level=info msg="CreateContainer within sandbox \"b2e460c851bc87fff0cf3a668cfea480f6c480de3d10fe87ee3b4a3be85eb75f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 8 00:26:09.143024 containerd[1736]: time="2025-11-08T00:26:09.142982017Z" level=info msg="CreateContainer within sandbox \"b2e460c851bc87fff0cf3a668cfea480f6c480de3d10fe87ee3b4a3be85eb75f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"26ac194150c187b8a405daeb20890db735d968a3cade9bc8bbb1164e0fee6adf\"" Nov 8 00:26:09.143760 containerd[1736]: time="2025-11-08T00:26:09.143687921Z" level=info msg="StartContainer for \"26ac194150c187b8a405daeb20890db735d968a3cade9bc8bbb1164e0fee6adf\"" Nov 8 00:26:09.188865 systemd[1]: Started cri-containerd-26ac194150c187b8a405daeb20890db735d968a3cade9bc8bbb1164e0fee6adf.scope - libcontainer container 26ac194150c187b8a405daeb20890db735d968a3cade9bc8bbb1164e0fee6adf. Nov 8 00:26:09.222557 containerd[1736]: time="2025-11-08T00:26:09.222356187Z" level=info msg="StartContainer for \"26ac194150c187b8a405daeb20890db735d968a3cade9bc8bbb1164e0fee6adf\" returns successfully" Nov 8 00:26:09.717377 kubelet[3265]: E1108 00:26:09.717309 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wpv6d" podUID="88161698-8450-46cd-aabf-3650fadd565e" Nov 8 00:26:10.814687 systemd[1]: cri-containerd-26ac194150c187b8a405daeb20890db735d968a3cade9bc8bbb1164e0fee6adf.scope: Deactivated successfully. Nov 8 00:26:10.847041 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-26ac194150c187b8a405daeb20890db735d968a3cade9bc8bbb1164e0fee6adf-rootfs.mount: Deactivated successfully. Nov 8 00:26:10.894329 kubelet[3265]: I1108 00:26:10.894296 3265 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 8 00:26:11.996528 systemd[1]: Created slice kubepods-burstable-pod96be1a30_6407_4196_820d_f11b48aff85e.slice - libcontainer container kubepods-burstable-pod96be1a30_6407_4196_820d_f11b48aff85e.slice. 
Nov 8 00:26:12.025200 kubelet[3265]: I1108 00:26:12.025135 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/96be1a30-6407-4196-820d-f11b48aff85e-config-volume\") pod \"coredns-674b8bbfcf-bm6fd\" (UID: \"96be1a30-6407-4196-820d-f11b48aff85e\") " pod="kube-system/coredns-674b8bbfcf-bm6fd" Nov 8 00:26:12.025822 kubelet[3265]: I1108 00:26:12.025258 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52s9n\" (UniqueName: \"kubernetes.io/projected/96be1a30-6407-4196-820d-f11b48aff85e-kube-api-access-52s9n\") pod \"coredns-674b8bbfcf-bm6fd\" (UID: \"96be1a30-6407-4196-820d-f11b48aff85e\") " pod="kube-system/coredns-674b8bbfcf-bm6fd" Nov 8 00:26:12.078132 systemd[1]: Created slice kubepods-burstable-podebfac5a4_d600_45d4_a79b_901e29ad49f6.slice - libcontainer container kubepods-burstable-podebfac5a4_d600_45d4_a79b_901e29ad49f6.slice. Nov 8 00:26:12.080202 containerd[1736]: time="2025-11-08T00:26:12.079533299Z" level=info msg="shim disconnected" id=26ac194150c187b8a405daeb20890db735d968a3cade9bc8bbb1164e0fee6adf namespace=k8s.io Nov 8 00:26:12.080202 containerd[1736]: time="2025-11-08T00:26:12.079614599Z" level=warning msg="cleaning up after shim disconnected" id=26ac194150c187b8a405daeb20890db735d968a3cade9bc8bbb1164e0fee6adf namespace=k8s.io Nov 8 00:26:12.080202 containerd[1736]: time="2025-11-08T00:26:12.079628200Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:26:12.099657 systemd[1]: Created slice kubepods-besteffort-podc8f9b682_1403_4984_8b7e_efa798fabe9d.slice - libcontainer container kubepods-besteffort-podc8f9b682_1403_4984_8b7e_efa798fabe9d.slice. Nov 8 00:26:12.126935 containerd[1736]: time="2025-11-08T00:26:12.126157175Z" level=warning msg="cleanup warnings time=\"2025-11-08T00:26:12Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Nov 8 00:26:12.137054 kubelet[3265]: I1108 00:26:12.136970 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/5435b4b0-7a30-4d83-a845-ff6ed8ff1797-goldmane-key-pair\") pod \"goldmane-666569f655-9cc5c\" (UID: \"5435b4b0-7a30-4d83-a845-ff6ed8ff1797\") " pod="calico-system/goldmane-666569f655-9cc5c" Nov 8 00:26:12.137191 kubelet[3265]: I1108 00:26:12.137093 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9296b681-c184-460a-98bd-faf9eec27bec-whisker-ca-bundle\") pod \"whisker-54c754d646-rhhsj\" (UID: \"9296b681-c184-460a-98bd-faf9eec27bec\") " pod="calico-system/whisker-54c754d646-rhhsj" Nov 8 00:26:12.138722 kubelet[3265]: I1108 00:26:12.137304 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkn2h\" (UniqueName: \"kubernetes.io/projected/ebfac5a4-d600-45d4-a79b-901e29ad49f6-kube-api-access-tkn2h\") pod \"coredns-674b8bbfcf-xhgqn\" (UID: \"ebfac5a4-d600-45d4-a79b-901e29ad49f6\") " pod="kube-system/coredns-674b8bbfcf-xhgqn" Nov 8 00:26:12.138722 kubelet[3265]: I1108 00:26:12.137542 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/9296b681-c184-460a-98bd-faf9eec27bec-whisker-backend-key-pair\") pod \"whisker-54c754d646-rhhsj\" (UID: \"9296b681-c184-460a-98bd-faf9eec27bec\") " pod="calico-system/whisker-54c754d646-rhhsj" Nov 8 00:26:12.138722 kubelet[3265]: I1108 00:26:12.137738 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kgh5\" (UniqueName: \"kubernetes.io/projected/4bf1c931-a10c-42c4-bece-79d61b489c62-kube-api-access-9kgh5\") pod \"calico-apiserver-579774f8c5-5sn5r\" (UID: \"4bf1c931-a10c-42c4-bece-79d61b489c62\") " pod="calico-apiserver/calico-apiserver-579774f8c5-5sn5r" Nov 8 00:26:12.138722 kubelet[3265]: I1108 00:26:12.137904 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsj2l\" (UniqueName: \"kubernetes.io/projected/5435b4b0-7a30-4d83-a845-ff6ed8ff1797-kube-api-access-lsj2l\") pod \"goldmane-666569f655-9cc5c\" (UID: \"5435b4b0-7a30-4d83-a845-ff6ed8ff1797\") " pod="calico-system/goldmane-666569f655-9cc5c" Nov 8 00:26:12.138722 kubelet[3265]: I1108 00:26:12.138016 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4bf1c931-a10c-42c4-bece-79d61b489c62-calico-apiserver-certs\") pod \"calico-apiserver-579774f8c5-5sn5r\" (UID: \"4bf1c931-a10c-42c4-bece-79d61b489c62\") " pod="calico-apiserver/calico-apiserver-579774f8c5-5sn5r" Nov 8 00:26:12.138993 kubelet[3265]: I1108 00:26:12.138219 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5sct\" (UniqueName: \"kubernetes.io/projected/9296b681-c184-460a-98bd-faf9eec27bec-kube-api-access-r5sct\") pod \"whisker-54c754d646-rhhsj\" (UID: \"9296b681-c184-460a-98bd-faf9eec27bec\") " pod="calico-system/whisker-54c754d646-rhhsj" Nov 8 00:26:12.138993 kubelet[3265]: I1108 00:26:12.138609 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5nb2\" (UniqueName: \"kubernetes.io/projected/c8f9b682-1403-4984-8b7e-efa798fabe9d-kube-api-access-k5nb2\") pod \"calico-apiserver-847dc76596-p994f\" (UID: \"c8f9b682-1403-4984-8b7e-efa798fabe9d\") " pod="calico-apiserver/calico-apiserver-847dc76596-p994f" Nov 8 00:26:12.139328 kubelet[3265]: I1108 00:26:12.139300 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ebfac5a4-d600-45d4-a79b-901e29ad49f6-config-volume\") pod \"coredns-674b8bbfcf-xhgqn\" (UID: \"ebfac5a4-d600-45d4-a79b-901e29ad49f6\") " pod="kube-system/coredns-674b8bbfcf-xhgqn" Nov 8 00:26:12.139748 kubelet[3265]: I1108 00:26:12.139567 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c8f9b682-1403-4984-8b7e-efa798fabe9d-calico-apiserver-certs\") pod \"calico-apiserver-847dc76596-p994f\" (UID: \"c8f9b682-1403-4984-8b7e-efa798fabe9d\") " pod="calico-apiserver/calico-apiserver-847dc76596-p994f" Nov 8 00:26:12.139835 kubelet[3265]: I1108 00:26:12.139772 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5435b4b0-7a30-4d83-a845-ff6ed8ff1797-config\") pod \"goldmane-666569f655-9cc5c\" (UID: \"5435b4b0-7a30-4d83-a845-ff6ed8ff1797\") " 
pod="calico-system/goldmane-666569f655-9cc5c" Nov 8 00:26:12.139966 kubelet[3265]: I1108 00:26:12.139943 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5435b4b0-7a30-4d83-a845-ff6ed8ff1797-goldmane-ca-bundle\") pod \"goldmane-666569f655-9cc5c\" (UID: \"5435b4b0-7a30-4d83-a845-ff6ed8ff1797\") " pod="calico-system/goldmane-666569f655-9cc5c" Nov 8 00:26:12.151760 systemd[1]: Created slice kubepods-besteffort-pod9296b681_c184_460a_98bd_faf9eec27bec.slice - libcontainer container kubepods-besteffort-pod9296b681_c184_460a_98bd_faf9eec27bec.slice. Nov 8 00:26:12.175836 systemd[1]: Created slice kubepods-besteffort-pod88161698_8450_46cd_aabf_3650fadd565e.slice - libcontainer container kubepods-besteffort-pod88161698_8450_46cd_aabf_3650fadd565e.slice. Nov 8 00:26:12.183582 containerd[1736]: time="2025-11-08T00:26:12.182185707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wpv6d,Uid:88161698-8450-46cd-aabf-3650fadd565e,Namespace:calico-system,Attempt:0,}" Nov 8 00:26:12.187068 systemd[1]: Created slice kubepods-besteffort-pod4bf1c931_a10c_42c4_bece_79d61b489c62.slice - libcontainer container kubepods-besteffort-pod4bf1c931_a10c_42c4_bece_79d61b489c62.slice. Nov 8 00:26:12.199374 systemd[1]: Created slice kubepods-besteffort-pod5435b4b0_7a30_4d83_a845_ff6ed8ff1797.slice - libcontainer container kubepods-besteffort-pod5435b4b0_7a30_4d83_a845_ff6ed8ff1797.slice. Nov 8 00:26:12.206910 systemd[1]: Created slice kubepods-besteffort-podf9dd6f03_0839_47a7_a4a1_75b8d5be8ef2.slice - libcontainer container kubepods-besteffort-podf9dd6f03_0839_47a7_a4a1_75b8d5be8ef2.slice. Nov 8 00:26:12.238123 systemd[1]: Created slice kubepods-besteffort-pod415c1089_29e6_4262_b21f_188443e0b159.slice - libcontainer container kubepods-besteffort-pod415c1089_29e6_4262_b21f_188443e0b159.slice. 
Nov 8 00:26:12.241975 kubelet[3265]: I1108 00:26:12.241930 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f9dd6f03-0839-47a7-a4a1-75b8d5be8ef2-calico-apiserver-certs\") pod \"calico-apiserver-847dc76596-gs868\" (UID: \"f9dd6f03-0839-47a7-a4a1-75b8d5be8ef2\") " pod="calico-apiserver/calico-apiserver-847dc76596-gs868" Nov 8 00:26:12.242110 kubelet[3265]: I1108 00:26:12.241996 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/415c1089-29e6-4262-b21f-188443e0b159-tigera-ca-bundle\") pod \"calico-kube-controllers-5ccd97dd97-f9lsk\" (UID: \"415c1089-29e6-4262-b21f-188443e0b159\") " pod="calico-system/calico-kube-controllers-5ccd97dd97-f9lsk" Nov 8 00:26:12.242164 kubelet[3265]: I1108 00:26:12.242116 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z8kg\" (UniqueName: \"kubernetes.io/projected/415c1089-29e6-4262-b21f-188443e0b159-kube-api-access-9z8kg\") pod \"calico-kube-controllers-5ccd97dd97-f9lsk\" (UID: \"415c1089-29e6-4262-b21f-188443e0b159\") " pod="calico-system/calico-kube-controllers-5ccd97dd97-f9lsk" Nov 8 00:26:12.242210 kubelet[3265]: I1108 00:26:12.242194 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72wkc\" (UniqueName: \"kubernetes.io/projected/f9dd6f03-0839-47a7-a4a1-75b8d5be8ef2-kube-api-access-72wkc\") pod \"calico-apiserver-847dc76596-gs868\" (UID: \"f9dd6f03-0839-47a7-a4a1-75b8d5be8ef2\") " pod="calico-apiserver/calico-apiserver-847dc76596-gs868" Nov 8 00:26:12.309573 containerd[1736]: time="2025-11-08T00:26:12.307719850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bm6fd,Uid:96be1a30-6407-4196-820d-f11b48aff85e,Namespace:kube-system,Attempt:0,}" Nov 8 00:26:12.376824 containerd[1736]: time="2025-11-08T00:26:12.376448656Z" level=error msg="Failed to destroy network for sandbox \"99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:12.377471 containerd[1736]: time="2025-11-08T00:26:12.377247661Z" level=error msg="encountered an error cleaning up failed sandbox \"99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:12.377471 containerd[1736]: time="2025-11-08T00:26:12.377311362Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wpv6d,Uid:88161698-8450-46cd-aabf-3650fadd565e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:12.378295 kubelet[3265]: E1108 00:26:12.377833 3265 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:12.378295 kubelet[3265]: E1108 00:26:12.377902 3265 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wpv6d" Nov 8 00:26:12.378295 kubelet[3265]: E1108 00:26:12.377930 3265 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wpv6d" Nov 8 00:26:12.378492 kubelet[3265]: E1108 00:26:12.377985 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wpv6d_calico-system(88161698-8450-46cd-aabf-3650fadd565e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wpv6d_calico-system(88161698-8450-46cd-aabf-3650fadd565e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wpv6d" podUID="88161698-8450-46cd-aabf-3650fadd565e" Nov 8 00:26:12.398168 containerd[1736]: time="2025-11-08T00:26:12.397708682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xhgqn,Uid:ebfac5a4-d600-45d4-a79b-901e29ad49f6,Namespace:kube-system,Attempt:0,}" Nov 8 00:26:12.418989 containerd[1736]: time="2025-11-08T00:26:12.418943708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-847dc76596-p994f,Uid:c8f9b682-1403-4984-8b7e-efa798fabe9d,Namespace:calico-apiserver,Attempt:0,}" Nov 8 00:26:12.424243 containerd[1736]: time="2025-11-08T00:26:12.424198139Z" level=error msg="Failed to destroy network for sandbox \"b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:12.424527 containerd[1736]: time="2025-11-08T00:26:12.424494741Z" level=error msg="encountered an error cleaning up failed sandbox \"b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:12.424608 containerd[1736]: time="2025-11-08T00:26:12.424557441Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-bm6fd,Uid:96be1a30-6407-4196-820d-f11b48aff85e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:12.424838 kubelet[3265]: E1108 00:26:12.424799 3265 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:12.424939 kubelet[3265]: E1108 00:26:12.424864 3265 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-bm6fd" Nov 8 00:26:12.424939 kubelet[3265]: E1108 00:26:12.424893 3265 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-bm6fd" Nov 8 00:26:12.425082 kubelet[3265]: E1108 00:26:12.424962 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-bm6fd_kube-system(96be1a30-6407-4196-820d-f11b48aff85e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-bm6fd_kube-system(96be1a30-6407-4196-820d-f11b48aff85e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-bm6fd" podUID="96be1a30-6407-4196-820d-f11b48aff85e" Nov 8 00:26:12.471545 containerd[1736]: time="2025-11-08T00:26:12.470273511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54c754d646-rhhsj,Uid:9296b681-c184-460a-98bd-faf9eec27bec,Namespace:calico-system,Attempt:0,}" Nov 8 00:26:12.493997 containerd[1736]: time="2025-11-08T00:26:12.493946550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-579774f8c5-5sn5r,Uid:4bf1c931-a10c-42c4-bece-79d61b489c62,Namespace:calico-apiserver,Attempt:0,}" Nov 8 00:26:12.508064 containerd[1736]: time="2025-11-08T00:26:12.508021233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-9cc5c,Uid:5435b4b0-7a30-4d83-a845-ff6ed8ff1797,Namespace:calico-system,Attempt:0,}" Nov 8 00:26:12.515286 containerd[1736]: time="2025-11-08T00:26:12.515026275Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-847dc76596-gs868,Uid:f9dd6f03-0839-47a7-a4a1-75b8d5be8ef2,Namespace:calico-apiserver,Attempt:0,}" Nov 8 00:26:12.526188 containerd[1736]: time="2025-11-08T00:26:12.526133940Z" level=error msg="Failed to destroy network for sandbox \"8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:12.526936 containerd[1736]: time="2025-11-08T00:26:12.526845444Z" level=error msg="encountered an error cleaning up failed sandbox \"8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:12.527150 containerd[1736]: time="2025-11-08T00:26:12.527027045Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xhgqn,Uid:ebfac5a4-d600-45d4-a79b-901e29ad49f6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:12.528552 kubelet[3265]: E1108 00:26:12.527911 3265 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:12.528552 kubelet[3265]: E1108 00:26:12.527987 3265 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-xhgqn" Nov 8 00:26:12.528552 kubelet[3265]: E1108 00:26:12.528015 3265 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-xhgqn" Nov 8 00:26:12.529142 kubelet[3265]: E1108 00:26:12.528079 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-xhgqn_kube-system(ebfac5a4-d600-45d4-a79b-901e29ad49f6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-xhgqn_kube-system(ebfac5a4-d600-45d4-a79b-901e29ad49f6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-xhgqn" podUID="ebfac5a4-d600-45d4-a79b-901e29ad49f6" Nov 8 00:26:12.567383 containerd[1736]: time="2025-11-08T00:26:12.567250582Z" level=error msg="Failed to destroy network for sandbox \"61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:12.570380 containerd[1736]: time="2025-11-08T00:26:12.570088499Z" level=error msg="encountered an error cleaning up failed sandbox \"61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:12.570380 containerd[1736]: time="2025-11-08T00:26:12.570160499Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-847dc76596-p994f,Uid:c8f9b682-1403-4984-8b7e-efa798fabe9d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:12.570643 kubelet[3265]: E1108 00:26:12.570416 3265 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:12.570643 kubelet[3265]: E1108 00:26:12.570486 3265 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-847dc76596-p994f" Nov 8 00:26:12.570643 kubelet[3265]: E1108 00:26:12.570512 3265 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-847dc76596-p994f" Nov 8 00:26:12.570827 kubelet[3265]: E1108 00:26:12.570579 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-847dc76596-p994f_calico-apiserver(c8f9b682-1403-4984-8b7e-efa798fabe9d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-847dc76596-p994f_calico-apiserver(c8f9b682-1403-4984-8b7e-efa798fabe9d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-847dc76596-p994f" podUID="c8f9b682-1403-4984-8b7e-efa798fabe9d" Nov 8 00:26:12.576794 containerd[1736]: time="2025-11-08T00:26:12.576298536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5ccd97dd97-f9lsk,Uid:415c1089-29e6-4262-b21f-188443e0b159,Namespace:calico-system,Attempt:0,}" Nov 8 00:26:12.639986 containerd[1736]: time="2025-11-08T00:26:12.639849810Z" level=error msg="Failed to destroy network for sandbox \"d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:12.641394 containerd[1736]: time="2025-11-08T00:26:12.641241518Z" level=error msg="encountered an error cleaning up failed sandbox \"d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:12.641394 containerd[1736]: time="2025-11-08T00:26:12.641326219Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54c754d646-rhhsj,Uid:9296b681-c184-460a-98bd-faf9eec27bec,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:12.642158 kubelet[3265]: E1108 00:26:12.642033 3265 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:12.642158 kubelet[3265]: E1108 00:26:12.642114 3265 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-54c754d646-rhhsj" Nov 8 00:26:12.642158 kubelet[3265]: E1108 00:26:12.642144 3265 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-54c754d646-rhhsj" Nov 8 00:26:12.642332 kubelet[3265]: E1108 00:26:12.642212 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"whisker-54c754d646-rhhsj_calico-system(9296b681-c184-460a-98bd-faf9eec27bec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-54c754d646-rhhsj_calico-system(9296b681-c184-460a-98bd-faf9eec27bec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-54c754d646-rhhsj" podUID="9296b681-c184-460a-98bd-faf9eec27bec" Nov 8 00:26:12.726724 containerd[1736]: time="2025-11-08T00:26:12.724923312Z" level=error msg="Failed to destroy network for sandbox \"5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:12.727185 containerd[1736]: time="2025-11-08T00:26:12.727143925Z" level=error msg="encountered an error cleaning up failed sandbox \"5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:12.727355 containerd[1736]: time="2025-11-08T00:26:12.727325526Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-579774f8c5-5sn5r,Uid:4bf1c931-a10c-42c4-bece-79d61b489c62,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:12.727776 kubelet[3265]: E1108 00:26:12.727735 3265 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:12.727962 kubelet[3265]: E1108 00:26:12.727937 3265 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-579774f8c5-5sn5r" Nov 8 00:26:12.728082 kubelet[3265]: E1108 00:26:12.728063 3265 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-579774f8c5-5sn5r" Nov 8 00:26:12.728226 kubelet[3265]: E1108 
00:26:12.728196 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-579774f8c5-5sn5r_calico-apiserver(4bf1c931-a10c-42c4-bece-79d61b489c62)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-579774f8c5-5sn5r_calico-apiserver(4bf1c931-a10c-42c4-bece-79d61b489c62)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-579774f8c5-5sn5r" podUID="4bf1c931-a10c-42c4-bece-79d61b489c62" Nov 8 00:26:12.743446 containerd[1736]: time="2025-11-08T00:26:12.743394420Z" level=error msg="Failed to destroy network for sandbox \"3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:12.744116 containerd[1736]: time="2025-11-08T00:26:12.744070124Z" level=error msg="encountered an error cleaning up failed sandbox \"3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:12.744216 containerd[1736]: time="2025-11-08T00:26:12.744152725Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-847dc76596-gs868,Uid:f9dd6f03-0839-47a7-a4a1-75b8d5be8ef2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:12.744886 kubelet[3265]: E1108 00:26:12.744447 3265 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:12.744886 kubelet[3265]: E1108 00:26:12.744539 3265 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-847dc76596-gs868" Nov 8 00:26:12.744886 kubelet[3265]: E1108 00:26:12.744571 3265 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-847dc76596-gs868" Nov 8 00:26:12.745099 kubelet[3265]: E1108 00:26:12.744629 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-847dc76596-gs868_calico-apiserver(f9dd6f03-0839-47a7-a4a1-75b8d5be8ef2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-847dc76596-gs868_calico-apiserver(f9dd6f03-0839-47a7-a4a1-75b8d5be8ef2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-847dc76596-gs868" podUID="f9dd6f03-0839-47a7-a4a1-75b8d5be8ef2" Nov 8 00:26:12.754279 containerd[1736]: time="2025-11-08T00:26:12.754232684Z" level=error msg="Failed to destroy network for sandbox \"916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:12.754908 containerd[1736]: time="2025-11-08T00:26:12.754855188Z" level=error msg="encountered an error cleaning up failed sandbox \"916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:12.755084 containerd[1736]: time="2025-11-08T00:26:12.755050989Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-9cc5c,Uid:5435b4b0-7a30-4d83-a845-ff6ed8ff1797,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:12.755454 kubelet[3265]: E1108 00:26:12.755415 3265 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:12.755627 kubelet[3265]: E1108 00:26:12.755604 3265 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-9cc5c" Nov 8 00:26:12.755805 kubelet[3265]: E1108 00:26:12.755763 3265 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-9cc5c" Nov 8 00:26:12.756149 kubelet[3265]: E1108 00:26:12.755857 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-9cc5c_calico-system(5435b4b0-7a30-4d83-a845-ff6ed8ff1797)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-9cc5c_calico-system(5435b4b0-7a30-4d83-a845-ff6ed8ff1797)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-9cc5c" podUID="5435b4b0-7a30-4d83-a845-ff6ed8ff1797" Nov 8 00:26:12.768667 containerd[1736]: time="2025-11-08T00:26:12.768618869Z" level=error msg="Failed to destroy network for sandbox \"e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:12.769009 containerd[1736]: time="2025-11-08T00:26:12.768967071Z" level=error msg="encountered an error cleaning up failed sandbox \"e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:12.769118 containerd[1736]: time="2025-11-08T00:26:12.769033472Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5ccd97dd97-f9lsk,Uid:415c1089-29e6-4262-b21f-188443e0b159,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:12.769747 kubelet[3265]: E1108 00:26:12.769284 3265 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:12.769747 kubelet[3265]: E1108 00:26:12.769352 3265 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5ccd97dd97-f9lsk" Nov 8 00:26:12.769747 kubelet[3265]: E1108 00:26:12.769374 3265 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5ccd97dd97-f9lsk" Nov 8 00:26:12.769928 kubelet[3265]: E1108 00:26:12.769434 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5ccd97dd97-f9lsk_calico-system(415c1089-29e6-4262-b21f-188443e0b159)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5ccd97dd97-f9lsk_calico-system(415c1089-29e6-4262-b21f-188443e0b159)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5ccd97dd97-f9lsk" podUID="415c1089-29e6-4262-b21f-188443e0b159" Nov 8 00:26:12.833956 kubelet[3265]: I1108 00:26:12.833819 3265 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b" Nov 8 00:26:12.836977 containerd[1736]: time="2025-11-08T00:26:12.836916672Z" level=info msg="StopPodSandbox for \"e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b\"" Nov 8 00:26:12.838669 containerd[1736]: time="2025-11-08T00:26:12.837138373Z" level=info msg="Ensure that sandbox e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b in task-service has been cleanup successfully" Nov 8 00:26:12.839251 kubelet[3265]: I1108 00:26:12.838674 3265 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475" Nov 8 00:26:12.839652 containerd[1736]: time="2025-11-08T00:26:12.839620188Z" level=info msg="StopPodSandbox for \"5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475\"" Nov 8 00:26:12.840975 containerd[1736]: time="2025-11-08T00:26:12.840942495Z" level=info msg="Ensure that sandbox 5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475 in task-service has been cleanup successfully" Nov 8 00:26:12.845893 kubelet[3265]: I1108 00:26:12.845772 3265 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e" Nov 8 00:26:12.846510 containerd[1736]: time="2025-11-08T00:26:12.846470128Z" level=info msg="StopPodSandbox for \"d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e\"" Nov 8 00:26:12.846745 containerd[1736]: time="2025-11-08T00:26:12.846659529Z" level=info msg="Ensure that sandbox d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e in task-service has been cleanup successfully" Nov 8 00:26:12.850870 kubelet[3265]: I1108 00:26:12.850846 3265 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b" Nov 8 00:26:12.851670 containerd[1736]: time="2025-11-08T00:26:12.851616358Z" level=info msg="StopPodSandbox for \"916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b\"" Nov 8 00:26:12.851891 containerd[1736]: time="2025-11-08T00:26:12.851835360Z" level=info msg="Ensure that sandbox 
916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b in task-service has been cleanup successfully" Nov 8 00:26:12.854902 kubelet[3265]: I1108 00:26:12.854882 3265 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169" Nov 8 00:26:12.856995 containerd[1736]: time="2025-11-08T00:26:12.856557887Z" level=info msg="StopPodSandbox for \"61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169\"" Nov 8 00:26:12.856995 containerd[1736]: time="2025-11-08T00:26:12.856771589Z" level=info msg="Ensure that sandbox 61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169 in task-service has been cleanup successfully" Nov 8 00:26:12.863432 kubelet[3265]: I1108 00:26:12.863413 3265 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69" Nov 8 00:26:12.864228 containerd[1736]: time="2025-11-08T00:26:12.864193332Z" level=info msg="StopPodSandbox for \"8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69\"" Nov 8 00:26:12.867416 containerd[1736]: time="2025-11-08T00:26:12.867384851Z" level=info msg="Ensure that sandbox 8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69 in task-service has been cleanup successfully" Nov 8 00:26:12.869856 kubelet[3265]: I1108 00:26:12.869336 3265 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109" Nov 8 00:26:12.871908 containerd[1736]: time="2025-11-08T00:26:12.871880978Z" level=info msg="StopPodSandbox for \"99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109\"" Nov 8 00:26:12.873267 containerd[1736]: time="2025-11-08T00:26:12.872168479Z" level=info msg="Ensure that sandbox 99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109 in task-service has been cleanup successfully" Nov 8 00:26:12.884104 containerd[1736]: time="2025-11-08T00:26:12.881013532Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 8 00:26:12.891429 kubelet[3265]: I1108 00:26:12.891209 3265 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f" Nov 8 00:26:12.901276 containerd[1736]: time="2025-11-08T00:26:12.900608547Z" level=info msg="StopPodSandbox for \"3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f\"" Nov 8 00:26:12.906779 containerd[1736]: time="2025-11-08T00:26:12.906618782Z" level=info msg="Ensure that sandbox 3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f in task-service has been cleanup successfully" Nov 8 00:26:12.912846 kubelet[3265]: I1108 00:26:12.912806 3265 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba" Nov 8 00:26:12.915721 containerd[1736]: time="2025-11-08T00:26:12.913969826Z" level=info msg="StopPodSandbox for \"b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba\"" Nov 8 00:26:12.915721 containerd[1736]: time="2025-11-08T00:26:12.914430928Z" level=info msg="Ensure that sandbox b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba in task-service has been cleanup successfully" Nov 8 00:26:13.032399 containerd[1736]: time="2025-11-08T00:26:13.032337023Z" level=error msg="StopPodSandbox for \"e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b\" 
failed" error="failed to destroy network for sandbox \"e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:13.032908 kubelet[3265]: E1108 00:26:13.032858 3265 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b" Nov 8 00:26:13.033552 kubelet[3265]: E1108 00:26:13.033387 3265 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b"} Nov 8 00:26:13.033552 kubelet[3265]: E1108 00:26:13.033479 3265 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"415c1089-29e6-4262-b21f-188443e0b159\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:26:13.033552 kubelet[3265]: E1108 00:26:13.033514 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"415c1089-29e6-4262-b21f-188443e0b159\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5ccd97dd97-f9lsk" podUID="415c1089-29e6-4262-b21f-188443e0b159" Nov 8 00:26:13.033976 containerd[1736]: time="2025-11-08T00:26:13.033811532Z" level=error msg="StopPodSandbox for \"5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475\" failed" error="failed to destroy network for sandbox \"5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:13.034034 kubelet[3265]: E1108 00:26:13.033993 3265 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475" Nov 8 00:26:13.034091 kubelet[3265]: E1108 00:26:13.034035 3265 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475"} Nov 8 00:26:13.034091 kubelet[3265]: 
E1108 00:26:13.034081 3265 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4bf1c931-a10c-42c4-bece-79d61b489c62\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:26:13.034203 kubelet[3265]: E1108 00:26:13.034112 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4bf1c931-a10c-42c4-bece-79d61b489c62\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-579774f8c5-5sn5r" podUID="4bf1c931-a10c-42c4-bece-79d61b489c62" Nov 8 00:26:13.071371 containerd[1736]: time="2025-11-08T00:26:13.070877651Z" level=error msg="StopPodSandbox for \"61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169\" failed" error="failed to destroy network for sandbox \"61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:13.072041 kubelet[3265]: E1108 00:26:13.071828 3265 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169" Nov 8 00:26:13.072041 kubelet[3265]: E1108 00:26:13.071907 3265 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169"} Nov 8 00:26:13.072041 kubelet[3265]: E1108 00:26:13.071965 3265 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c8f9b682-1403-4984-8b7e-efa798fabe9d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:26:13.072041 kubelet[3265]: E1108 00:26:13.072000 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c8f9b682-1403-4984-8b7e-efa798fabe9d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-847dc76596-p994f" podUID="c8f9b682-1403-4984-8b7e-efa798fabe9d" Nov 8 00:26:13.075020 containerd[1736]: time="2025-11-08T00:26:13.074911474Z" level=error msg="StopPodSandbox for \"d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e\" failed" error="failed to destroy network for sandbox \"d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:13.075217 kubelet[3265]: E1108 00:26:13.075135 3265 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e" Nov 8 00:26:13.075217 kubelet[3265]: E1108 00:26:13.075188 3265 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e"} Nov 8 00:26:13.075337 kubelet[3265]: E1108 00:26:13.075226 3265 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9296b681-c184-460a-98bd-faf9eec27bec\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:26:13.075337 kubelet[3265]: E1108 00:26:13.075259 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9296b681-c184-460a-98bd-faf9eec27bec\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-54c754d646-rhhsj" podUID="9296b681-c184-460a-98bd-faf9eec27bec" Nov 8 00:26:13.076120 containerd[1736]: time="2025-11-08T00:26:13.075875280Z" level=error msg="StopPodSandbox for \"916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b\" failed" error="failed to destroy network for sandbox \"916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:13.076224 kubelet[3265]: E1108 00:26:13.076084 3265 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b" Nov 8 00:26:13.076224 kubelet[3265]: E1108 00:26:13.076148 3265 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b"} Nov 8 00:26:13.076224 kubelet[3265]: E1108 00:26:13.076197 3265 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5435b4b0-7a30-4d83-a845-ff6ed8ff1797\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:26:13.076387 kubelet[3265]: E1108 00:26:13.076227 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5435b4b0-7a30-4d83-a845-ff6ed8ff1797\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-9cc5c" podUID="5435b4b0-7a30-4d83-a845-ff6ed8ff1797" Nov 8 00:26:13.085605 containerd[1736]: time="2025-11-08T00:26:13.085227535Z" level=error msg="StopPodSandbox for \"99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109\" failed" error="failed to destroy network for sandbox \"99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:13.086824 kubelet[3265]: E1108 00:26:13.085430 3265 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109" Nov 8 00:26:13.086824 kubelet[3265]: E1108 00:26:13.085481 3265 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109"} Nov 8 00:26:13.086824 kubelet[3265]: E1108 00:26:13.085521 3265 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"88161698-8450-46cd-aabf-3650fadd565e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:26:13.086824 kubelet[3265]: E1108 00:26:13.085550 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"88161698-8450-46cd-aabf-3650fadd565e\" with KillPodSandboxError: \"rpc error: code = 
Unknown desc = failed to destroy network for sandbox \\\"99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wpv6d" podUID="88161698-8450-46cd-aabf-3650fadd565e" Nov 8 00:26:13.088587 containerd[1736]: time="2025-11-08T00:26:13.088490854Z" level=error msg="StopPodSandbox for \"8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69\" failed" error="failed to destroy network for sandbox \"8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:13.089001 kubelet[3265]: E1108 00:26:13.088807 3265 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69" Nov 8 00:26:13.089001 kubelet[3265]: E1108 00:26:13.088879 3265 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69"} Nov 8 00:26:13.089001 kubelet[3265]: E1108 00:26:13.088915 3265 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ebfac5a4-d600-45d4-a79b-901e29ad49f6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:26:13.089001 kubelet[3265]: E1108 00:26:13.088960 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ebfac5a4-d600-45d4-a79b-901e29ad49f6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-xhgqn" podUID="ebfac5a4-d600-45d4-a79b-901e29ad49f6" Nov 8 00:26:13.090097 containerd[1736]: time="2025-11-08T00:26:13.090064964Z" level=error msg="StopPodSandbox for \"3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f\" failed" error="failed to destroy network for sandbox \"3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:13.090278 kubelet[3265]: E1108 00:26:13.090225 3265 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f" Nov 8 00:26:13.090278 kubelet[3265]: E1108 00:26:13.090265 3265 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f"} Nov 8 00:26:13.090398 kubelet[3265]: E1108 00:26:13.090297 3265 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f9dd6f03-0839-47a7-a4a1-75b8d5be8ef2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:26:13.090398 kubelet[3265]: E1108 00:26:13.090324 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f9dd6f03-0839-47a7-a4a1-75b8d5be8ef2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-847dc76596-gs868" podUID="f9dd6f03-0839-47a7-a4a1-75b8d5be8ef2" Nov 8 00:26:13.097605 containerd[1736]: time="2025-11-08T00:26:13.097564508Z" level=error msg="StopPodSandbox for \"b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba\" failed" error="failed to destroy network for sandbox \"b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:13.097810 kubelet[3265]: E1108 00:26:13.097762 3265 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba" Nov 8 00:26:13.097879 kubelet[3265]: E1108 00:26:13.097811 3265 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba"} Nov 8 00:26:13.097879 kubelet[3265]: E1108 00:26:13.097848 3265 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"96be1a30-6407-4196-820d-f11b48aff85e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" Nov 8 00:26:13.097987 kubelet[3265]: E1108 00:26:13.097873 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"96be1a30-6407-4196-820d-f11b48aff85e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-bm6fd" podUID="96be1a30-6407-4196-820d-f11b48aff85e" Nov 8 00:26:13.237677 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109-shm.mount: Deactivated successfully. Nov 8 00:26:18.952996 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount106414322.mount: Deactivated successfully. Nov 8 00:26:18.989510 containerd[1736]: time="2025-11-08T00:26:18.989454733Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:18.991688 containerd[1736]: time="2025-11-08T00:26:18.991633146Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 8 00:26:18.994419 containerd[1736]: time="2025-11-08T00:26:18.994364962Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:18.998366 containerd[1736]: time="2025-11-08T00:26:18.998312785Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:18.999031 containerd[1736]: time="2025-11-08T00:26:18.998870288Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 6.115260741s" Nov 8 00:26:18.999031 containerd[1736]: time="2025-11-08T00:26:18.998910388Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 8 00:26:19.023999 containerd[1736]: time="2025-11-08T00:26:19.023949536Z" level=info msg="CreateContainer within sandbox \"b2e460c851bc87fff0cf3a668cfea480f6c480de3d10fe87ee3b4a3be85eb75f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 8 00:26:19.058095 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3531885770.mount: Deactivated successfully. 
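Annotation, not part of the captured journal: every RunPodSandbox/StopPodSandbox failure above reports the same root cause, `stat /var/lib/calico/nodename: no such file or directory`. That file is written by the calico/node container once it is running with /var/lib/calico mounted, and the CNI plugin reads it on every add/delete; until the pull of ghcr.io/flatcar/calico/node:v3.30.4 completes and the container starts (the records that follow), the plugin can only fail and kubelet keeps retrying the affected pods. Below is a minimal Go sketch of that same file check, for illustration only; it is not the plugin's actual code.

```go
// Illustrative sketch (assumption: mirrors only the nodename-file check that the
// log lines above report, nothing else of the Calico CNI plugin's behavior).
package main

import (
	"fmt"
	"os"
)

func main() {
	const nodenameFile = "/var/lib/calico/nodename"

	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		if os.IsNotExist(err) {
			// The condition behind every "failed (add/delete)" record above:
			// calico/node has not started yet, so the file does not exist.
			fmt.Printf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\n", nodenameFile)
			os.Exit(1)
		}
		fmt.Println("unexpected error reading nodename file:", err)
		os.Exit(1)
	}
	fmt.Println("CNI would use node name:", string(data))
}
```

Run on the node, this prints the same hint while calico-node is down, and the stored node name once /var/lib/calico/nodename exists.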
Nov 8 00:26:19.070026 containerd[1736]: time="2025-11-08T00:26:19.069978707Z" level=info msg="CreateContainer within sandbox \"b2e460c851bc87fff0cf3a668cfea480f6c480de3d10fe87ee3b4a3be85eb75f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e774a8c960ef4759a9a8ac77518bb866a2eedb285cd8012440eabfdf2d6e79f5\"" Nov 8 00:26:19.070998 containerd[1736]: time="2025-11-08T00:26:19.070738812Z" level=info msg="StartContainer for \"e774a8c960ef4759a9a8ac77518bb866a2eedb285cd8012440eabfdf2d6e79f5\"" Nov 8 00:26:19.104855 systemd[1]: Started cri-containerd-e774a8c960ef4759a9a8ac77518bb866a2eedb285cd8012440eabfdf2d6e79f5.scope - libcontainer container e774a8c960ef4759a9a8ac77518bb866a2eedb285cd8012440eabfdf2d6e79f5. Nov 8 00:26:19.137023 containerd[1736]: time="2025-11-08T00:26:19.136865801Z" level=info msg="StartContainer for \"e774a8c960ef4759a9a8ac77518bb866a2eedb285cd8012440eabfdf2d6e79f5\" returns successfully" Nov 8 00:26:19.407853 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 8 00:26:19.408006 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 8 00:26:19.541066 containerd[1736]: time="2025-11-08T00:26:19.540915383Z" level=info msg="StopPodSandbox for \"d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e\"" Nov 8 00:26:19.673402 containerd[1736]: 2025-11-08 00:26:19.620 [INFO][4497] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e" Nov 8 00:26:19.673402 containerd[1736]: 2025-11-08 00:26:19.621 [INFO][4497] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e" iface="eth0" netns="/var/run/netns/cni-893ee6b3-2682-ee79-a4fc-009a3f4d365a" Nov 8 00:26:19.673402 containerd[1736]: 2025-11-08 00:26:19.621 [INFO][4497] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e" iface="eth0" netns="/var/run/netns/cni-893ee6b3-2682-ee79-a4fc-009a3f4d365a" Nov 8 00:26:19.673402 containerd[1736]: 2025-11-08 00:26:19.623 [INFO][4497] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e" iface="eth0" netns="/var/run/netns/cni-893ee6b3-2682-ee79-a4fc-009a3f4d365a" Nov 8 00:26:19.673402 containerd[1736]: 2025-11-08 00:26:19.623 [INFO][4497] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e" Nov 8 00:26:19.673402 containerd[1736]: 2025-11-08 00:26:19.623 [INFO][4497] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e" Nov 8 00:26:19.673402 containerd[1736]: 2025-11-08 00:26:19.657 [INFO][4504] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e" HandleID="k8s-pod-network.d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e" Workload="ci--4081.3.6--n--036966ce4d-k8s-whisker--54c754d646--rhhsj-eth0" Nov 8 00:26:19.673402 containerd[1736]: 2025-11-08 00:26:19.658 [INFO][4504] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:19.673402 containerd[1736]: 2025-11-08 00:26:19.658 [INFO][4504] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:26:19.673402 containerd[1736]: 2025-11-08 00:26:19.666 [WARNING][4504] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e" HandleID="k8s-pod-network.d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e" Workload="ci--4081.3.6--n--036966ce4d-k8s-whisker--54c754d646--rhhsj-eth0" Nov 8 00:26:19.673402 containerd[1736]: 2025-11-08 00:26:19.666 [INFO][4504] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e" HandleID="k8s-pod-network.d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e" Workload="ci--4081.3.6--n--036966ce4d-k8s-whisker--54c754d646--rhhsj-eth0" Nov 8 00:26:19.673402 containerd[1736]: 2025-11-08 00:26:19.668 [INFO][4504] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:19.673402 containerd[1736]: 2025-11-08 00:26:19.671 [INFO][4497] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e" Nov 8 00:26:19.674253 containerd[1736]: time="2025-11-08T00:26:19.673478664Z" level=info msg="TearDown network for sandbox \"d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e\" successfully" Nov 8 00:26:19.674253 containerd[1736]: time="2025-11-08T00:26:19.673512564Z" level=info msg="StopPodSandbox for \"d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e\" returns successfully" Nov 8 00:26:19.697789 kubelet[3265]: I1108 00:26:19.697750 3265 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9296b681-c184-460a-98bd-faf9eec27bec-whisker-ca-bundle\") pod \"9296b681-c184-460a-98bd-faf9eec27bec\" (UID: \"9296b681-c184-460a-98bd-faf9eec27bec\") " Nov 8 00:26:19.698513 kubelet[3265]: I1108 00:26:19.697812 3265 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5sct\" (UniqueName: \"kubernetes.io/projected/9296b681-c184-460a-98bd-faf9eec27bec-kube-api-access-r5sct\") pod \"9296b681-c184-460a-98bd-faf9eec27bec\" (UID: \"9296b681-c184-460a-98bd-faf9eec27bec\") " Nov 8 00:26:19.698513 kubelet[3265]: I1108 00:26:19.697843 3265 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9296b681-c184-460a-98bd-faf9eec27bec-whisker-backend-key-pair\") pod \"9296b681-c184-460a-98bd-faf9eec27bec\" (UID: \"9296b681-c184-460a-98bd-faf9eec27bec\") " Nov 8 00:26:19.699809 kubelet[3265]: I1108 00:26:19.699760 3265 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9296b681-c184-460a-98bd-faf9eec27bec-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "9296b681-c184-460a-98bd-faf9eec27bec" (UID: "9296b681-c184-460a-98bd-faf9eec27bec"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 8 00:26:19.704838 kubelet[3265]: I1108 00:26:19.703702 3265 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9296b681-c184-460a-98bd-faf9eec27bec-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "9296b681-c184-460a-98bd-faf9eec27bec" (UID: "9296b681-c184-460a-98bd-faf9eec27bec"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 8 00:26:19.705955 kubelet[3265]: I1108 00:26:19.705926 3265 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9296b681-c184-460a-98bd-faf9eec27bec-kube-api-access-r5sct" (OuterVolumeSpecName: "kube-api-access-r5sct") pod "9296b681-c184-460a-98bd-faf9eec27bec" (UID: "9296b681-c184-460a-98bd-faf9eec27bec"). InnerVolumeSpecName "kube-api-access-r5sct". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 8 00:26:19.725663 systemd[1]: Removed slice kubepods-besteffort-pod9296b681_c184_460a_98bd_faf9eec27bec.slice - libcontainer container kubepods-besteffort-pod9296b681_c184_460a_98bd_faf9eec27bec.slice. Nov 8 00:26:19.799094 kubelet[3265]: I1108 00:26:19.799046 3265 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r5sct\" (UniqueName: \"kubernetes.io/projected/9296b681-c184-460a-98bd-faf9eec27bec-kube-api-access-r5sct\") on node \"ci-4081.3.6-n-036966ce4d\" DevicePath \"\"" Nov 8 00:26:19.799094 kubelet[3265]: I1108 00:26:19.799088 3265 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9296b681-c184-460a-98bd-faf9eec27bec-whisker-backend-key-pair\") on node \"ci-4081.3.6-n-036966ce4d\" DevicePath \"\"" Nov 8 00:26:19.799094 kubelet[3265]: I1108 00:26:19.799101 3265 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9296b681-c184-460a-98bd-faf9eec27bec-whisker-ca-bundle\") on node \"ci-4081.3.6-n-036966ce4d\" DevicePath \"\"" Nov 8 00:26:19.956236 systemd[1]: run-netns-cni\x2d893ee6b3\x2d2682\x2dee79\x2da4fc\x2d009a3f4d365a.mount: Deactivated successfully. Nov 8 00:26:19.956366 systemd[1]: var-lib-kubelet-pods-9296b681\x2dc184\x2d460a\x2d98bd\x2dfaf9eec27bec-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr5sct.mount: Deactivated successfully. Nov 8 00:26:19.956459 systemd[1]: var-lib-kubelet-pods-9296b681\x2dc184\x2d460a\x2d98bd\x2dfaf9eec27bec-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 8 00:26:19.975600 kubelet[3265]: I1108 00:26:19.975140 3265 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-97vqn" podStartSLOduration=1.693376345 podStartE2EDuration="21.975099242s" podCreationTimestamp="2025-11-08 00:25:58 +0000 UTC" firstStartedPulling="2025-11-08 00:25:58.718110397 +0000 UTC m=+23.142152039" lastFinishedPulling="2025-11-08 00:26:18.999833194 +0000 UTC m=+43.423874936" observedRunningTime="2025-11-08 00:26:19.971966523 +0000 UTC m=+44.396008265" watchObservedRunningTime="2025-11-08 00:26:19.975099242 +0000 UTC m=+44.399140984" Nov 8 00:26:20.109473 systemd[1]: Created slice kubepods-besteffort-podeb4448af_cf19_4ce5_bbad_22d98ef7ab44.slice - libcontainer container kubepods-besteffort-podeb4448af_cf19_4ce5_bbad_22d98ef7ab44.slice. 
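Annotation, not part of the captured journal: with calico-node now running, the old whisker sandbox is torn down, its configmap, secret, and projected-token volumes are detached, and a replacement pod whisker-7b877f7c4d-czsrv is created; the IPAM records that follow show the node's affine block 192.168.80.0/26 being loaded and 192.168.80.1/26 being claimed for it. The short Go sketch below only enumerates that /26 block (CIDR copied from the log); it does not model how Calico actually reserves or skips addresses inside a block.

```go
// Illustrative sketch (assumption: the block CIDR is taken verbatim from the
// IPAM records that follow; Calico's internal reservation rules are not modeled).
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.80.0/26") // block from the ipam/ipam.go records

	count := 0
	for addr := block.Addr(); block.Contains(addr); addr = addr.Next() {
		count++
	}
	fmt.Printf("block %s holds %d addresses, starting at %s\n", block, count, block.Addr())
	fmt.Println("first address after the network address:", block.Addr().Next()) // 192.168.80.1, the IP claimed below
}
```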
Nov 8 00:26:20.200932 kubelet[3265]: I1108 00:26:20.200888 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtpmb\" (UniqueName: \"kubernetes.io/projected/eb4448af-cf19-4ce5-bbad-22d98ef7ab44-kube-api-access-xtpmb\") pod \"whisker-7b877f7c4d-czsrv\" (UID: \"eb4448af-cf19-4ce5-bbad-22d98ef7ab44\") " pod="calico-system/whisker-7b877f7c4d-czsrv" Nov 8 00:26:20.200932 kubelet[3265]: I1108 00:26:20.200942 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/eb4448af-cf19-4ce5-bbad-22d98ef7ab44-whisker-backend-key-pair\") pod \"whisker-7b877f7c4d-czsrv\" (UID: \"eb4448af-cf19-4ce5-bbad-22d98ef7ab44\") " pod="calico-system/whisker-7b877f7c4d-czsrv" Nov 8 00:26:20.201143 kubelet[3265]: I1108 00:26:20.200977 3265 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb4448af-cf19-4ce5-bbad-22d98ef7ab44-whisker-ca-bundle\") pod \"whisker-7b877f7c4d-czsrv\" (UID: \"eb4448af-cf19-4ce5-bbad-22d98ef7ab44\") " pod="calico-system/whisker-7b877f7c4d-czsrv" Nov 8 00:26:20.416863 containerd[1736]: time="2025-11-08T00:26:20.416812045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b877f7c4d-czsrv,Uid:eb4448af-cf19-4ce5-bbad-22d98ef7ab44,Namespace:calico-system,Attempt:0,}" Nov 8 00:26:20.591001 systemd-networkd[1354]: cali1f67308bfd5: Link UP Nov 8 00:26:20.591252 systemd-networkd[1354]: cali1f67308bfd5: Gained carrier Nov 8 00:26:20.609957 containerd[1736]: 2025-11-08 00:26:20.484 [INFO][4527] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:26:20.609957 containerd[1736]: 2025-11-08 00:26:20.496 [INFO][4527] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--036966ce4d-k8s-whisker--7b877f7c4d--czsrv-eth0 whisker-7b877f7c4d- calico-system eb4448af-cf19-4ce5-bbad-22d98ef7ab44 934 0 2025-11-08 00:26:20 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7b877f7c4d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-n-036966ce4d whisker-7b877f7c4d-czsrv eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali1f67308bfd5 [] [] }} ContainerID="d23030955248aabb58af26414a81ad779b4a63029967e39f21aa14e0349c8fbe" Namespace="calico-system" Pod="whisker-7b877f7c4d-czsrv" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-whisker--7b877f7c4d--czsrv-" Nov 8 00:26:20.609957 containerd[1736]: 2025-11-08 00:26:20.496 [INFO][4527] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d23030955248aabb58af26414a81ad779b4a63029967e39f21aa14e0349c8fbe" Namespace="calico-system" Pod="whisker-7b877f7c4d-czsrv" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-whisker--7b877f7c4d--czsrv-eth0" Nov 8 00:26:20.609957 containerd[1736]: 2025-11-08 00:26:20.531 [INFO][4538] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d23030955248aabb58af26414a81ad779b4a63029967e39f21aa14e0349c8fbe" HandleID="k8s-pod-network.d23030955248aabb58af26414a81ad779b4a63029967e39f21aa14e0349c8fbe" Workload="ci--4081.3.6--n--036966ce4d-k8s-whisker--7b877f7c4d--czsrv-eth0" Nov 8 00:26:20.609957 containerd[1736]: 2025-11-08 00:26:20.532 [INFO][4538] ipam/ipam_plugin.go 275: Auto 
assigning IP ContainerID="d23030955248aabb58af26414a81ad779b4a63029967e39f21aa14e0349c8fbe" HandleID="k8s-pod-network.d23030955248aabb58af26414a81ad779b4a63029967e39f21aa14e0349c8fbe" Workload="ci--4081.3.6--n--036966ce4d-k8s-whisker--7b877f7c4d--czsrv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d58f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-036966ce4d", "pod":"whisker-7b877f7c4d-czsrv", "timestamp":"2025-11-08 00:26:20.531933822 +0000 UTC"}, Hostname:"ci-4081.3.6-n-036966ce4d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:26:20.609957 containerd[1736]: 2025-11-08 00:26:20.532 [INFO][4538] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:20.609957 containerd[1736]: 2025-11-08 00:26:20.532 [INFO][4538] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:20.609957 containerd[1736]: 2025-11-08 00:26:20.532 [INFO][4538] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-036966ce4d' Nov 8 00:26:20.609957 containerd[1736]: 2025-11-08 00:26:20.538 [INFO][4538] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d23030955248aabb58af26414a81ad779b4a63029967e39f21aa14e0349c8fbe" host="ci-4081.3.6-n-036966ce4d" Nov 8 00:26:20.609957 containerd[1736]: 2025-11-08 00:26:20.542 [INFO][4538] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-036966ce4d" Nov 8 00:26:20.609957 containerd[1736]: 2025-11-08 00:26:20.546 [INFO][4538] ipam/ipam.go 511: Trying affinity for 192.168.80.0/26 host="ci-4081.3.6-n-036966ce4d" Nov 8 00:26:20.609957 containerd[1736]: 2025-11-08 00:26:20.547 [INFO][4538] ipam/ipam.go 158: Attempting to load block cidr=192.168.80.0/26 host="ci-4081.3.6-n-036966ce4d" Nov 8 00:26:20.609957 containerd[1736]: 2025-11-08 00:26:20.549 [INFO][4538] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.80.0/26 host="ci-4081.3.6-n-036966ce4d" Nov 8 00:26:20.609957 containerd[1736]: 2025-11-08 00:26:20.549 [INFO][4538] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.80.0/26 handle="k8s-pod-network.d23030955248aabb58af26414a81ad779b4a63029967e39f21aa14e0349c8fbe" host="ci-4081.3.6-n-036966ce4d" Nov 8 00:26:20.609957 containerd[1736]: 2025-11-08 00:26:20.550 [INFO][4538] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d23030955248aabb58af26414a81ad779b4a63029967e39f21aa14e0349c8fbe Nov 8 00:26:20.609957 containerd[1736]: 2025-11-08 00:26:20.555 [INFO][4538] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.80.0/26 handle="k8s-pod-network.d23030955248aabb58af26414a81ad779b4a63029967e39f21aa14e0349c8fbe" host="ci-4081.3.6-n-036966ce4d" Nov 8 00:26:20.609957 containerd[1736]: 2025-11-08 00:26:20.560 [INFO][4538] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.80.1/26] block=192.168.80.0/26 handle="k8s-pod-network.d23030955248aabb58af26414a81ad779b4a63029967e39f21aa14e0349c8fbe" host="ci-4081.3.6-n-036966ce4d" Nov 8 00:26:20.609957 containerd[1736]: 2025-11-08 00:26:20.561 [INFO][4538] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.80.1/26] handle="k8s-pod-network.d23030955248aabb58af26414a81ad779b4a63029967e39f21aa14e0349c8fbe" host="ci-4081.3.6-n-036966ce4d" Nov 8 00:26:20.609957 containerd[1736]: 2025-11-08 00:26:20.561 [INFO][4538] ipam/ipam_plugin.go 
398: Released host-wide IPAM lock. Nov 8 00:26:20.609957 containerd[1736]: 2025-11-08 00:26:20.561 [INFO][4538] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.80.1/26] IPv6=[] ContainerID="d23030955248aabb58af26414a81ad779b4a63029967e39f21aa14e0349c8fbe" HandleID="k8s-pod-network.d23030955248aabb58af26414a81ad779b4a63029967e39f21aa14e0349c8fbe" Workload="ci--4081.3.6--n--036966ce4d-k8s-whisker--7b877f7c4d--czsrv-eth0" Nov 8 00:26:20.611647 containerd[1736]: 2025-11-08 00:26:20.563 [INFO][4527] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d23030955248aabb58af26414a81ad779b4a63029967e39f21aa14e0349c8fbe" Namespace="calico-system" Pod="whisker-7b877f7c4d-czsrv" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-whisker--7b877f7c4d--czsrv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--036966ce4d-k8s-whisker--7b877f7c4d--czsrv-eth0", GenerateName:"whisker-7b877f7c4d-", Namespace:"calico-system", SelfLink:"", UID:"eb4448af-cf19-4ce5-bbad-22d98ef7ab44", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 26, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7b877f7c4d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-036966ce4d", ContainerID:"", Pod:"whisker-7b877f7c4d-czsrv", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.80.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1f67308bfd5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:20.611647 containerd[1736]: 2025-11-08 00:26:20.563 [INFO][4527] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.80.1/32] ContainerID="d23030955248aabb58af26414a81ad779b4a63029967e39f21aa14e0349c8fbe" Namespace="calico-system" Pod="whisker-7b877f7c4d-czsrv" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-whisker--7b877f7c4d--czsrv-eth0" Nov 8 00:26:20.611647 containerd[1736]: 2025-11-08 00:26:20.563 [INFO][4527] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1f67308bfd5 ContainerID="d23030955248aabb58af26414a81ad779b4a63029967e39f21aa14e0349c8fbe" Namespace="calico-system" Pod="whisker-7b877f7c4d-czsrv" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-whisker--7b877f7c4d--czsrv-eth0" Nov 8 00:26:20.611647 containerd[1736]: 2025-11-08 00:26:20.590 [INFO][4527] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d23030955248aabb58af26414a81ad779b4a63029967e39f21aa14e0349c8fbe" Namespace="calico-system" Pod="whisker-7b877f7c4d-czsrv" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-whisker--7b877f7c4d--czsrv-eth0" Nov 8 00:26:20.611647 containerd[1736]: 2025-11-08 00:26:20.590 [INFO][4527] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d23030955248aabb58af26414a81ad779b4a63029967e39f21aa14e0349c8fbe" 
Namespace="calico-system" Pod="whisker-7b877f7c4d-czsrv" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-whisker--7b877f7c4d--czsrv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--036966ce4d-k8s-whisker--7b877f7c4d--czsrv-eth0", GenerateName:"whisker-7b877f7c4d-", Namespace:"calico-system", SelfLink:"", UID:"eb4448af-cf19-4ce5-bbad-22d98ef7ab44", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 26, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7b877f7c4d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-036966ce4d", ContainerID:"d23030955248aabb58af26414a81ad779b4a63029967e39f21aa14e0349c8fbe", Pod:"whisker-7b877f7c4d-czsrv", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.80.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1f67308bfd5", MAC:"b2:f5:74:bd:71:b4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:20.611647 containerd[1736]: 2025-11-08 00:26:20.606 [INFO][4527] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d23030955248aabb58af26414a81ad779b4a63029967e39f21aa14e0349c8fbe" Namespace="calico-system" Pod="whisker-7b877f7c4d-czsrv" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-whisker--7b877f7c4d--czsrv-eth0" Nov 8 00:26:20.631042 containerd[1736]: time="2025-11-08T00:26:20.630583502Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:26:20.631042 containerd[1736]: time="2025-11-08T00:26:20.630741903Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:26:20.631042 containerd[1736]: time="2025-11-08T00:26:20.630774803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:20.631042 containerd[1736]: time="2025-11-08T00:26:20.630970104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:20.650867 systemd[1]: Started cri-containerd-d23030955248aabb58af26414a81ad779b4a63029967e39f21aa14e0349c8fbe.scope - libcontainer container d23030955248aabb58af26414a81ad779b4a63029967e39f21aa14e0349c8fbe. 
Nov 8 00:26:20.691868 containerd[1736]: time="2025-11-08T00:26:20.691362059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b877f7c4d-czsrv,Uid:eb4448af-cf19-4ce5-bbad-22d98ef7ab44,Namespace:calico-system,Attempt:0,} returns sandbox id \"d23030955248aabb58af26414a81ad779b4a63029967e39f21aa14e0349c8fbe\"" Nov 8 00:26:20.694878 containerd[1736]: time="2025-11-08T00:26:20.694820379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:26:20.933271 containerd[1736]: time="2025-11-08T00:26:20.933069080Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:26:20.936786 containerd[1736]: time="2025-11-08T00:26:20.936721201Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:26:20.936933 containerd[1736]: time="2025-11-08T00:26:20.936818202Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:26:20.936994 kubelet[3265]: E1108 00:26:20.936956 3265 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:26:20.937385 kubelet[3265]: E1108 00:26:20.937005 3265 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:26:20.937467 kubelet[3265]: E1108 00:26:20.937171 3265 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b52ac106012e441c897a14b0417ed820,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xtpmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7b877f7c4d-czsrv_calico-system(eb4448af-cf19-4ce5-bbad-22d98ef7ab44): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:26:20.939477 containerd[1736]: time="2025-11-08T00:26:20.939255416Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:26:21.190346 containerd[1736]: time="2025-11-08T00:26:21.190160791Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:26:21.194343 containerd[1736]: time="2025-11-08T00:26:21.194135414Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:26:21.194343 containerd[1736]: time="2025-11-08T00:26:21.194275915Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:26:21.194579 kubelet[3265]: E1108 00:26:21.194521 3265 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:26:21.194660 kubelet[3265]: E1108 00:26:21.194599 3265 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:26:21.195336 kubelet[3265]: E1108 00:26:21.195273 3265 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xtpmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7b877f7c4d-czsrv_calico-system(eb4448af-cf19-4ce5-bbad-22d98ef7ab44): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:26:21.196546 kubelet[3265]: E1108 00:26:21.196501 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7b877f7c4d-czsrv" podUID="eb4448af-cf19-4ce5-bbad-22d98ef7ab44" Nov 8 00:26:21.227761 kernel: bpftool[4717]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 8 00:26:21.618198 systemd-networkd[1354]: vxlan.calico: Link UP Nov 8 00:26:21.618209 systemd-networkd[1354]: vxlan.calico: Gained carrier Nov 8 00:26:21.722139 kubelet[3265]: I1108 
00:26:21.721858 3265 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9296b681-c184-460a-98bd-faf9eec27bec" path="/var/lib/kubelet/pods/9296b681-c184-460a-98bd-faf9eec27bec/volumes" Nov 8 00:26:21.764812 systemd-networkd[1354]: cali1f67308bfd5: Gained IPv6LL Nov 8 00:26:21.939743 kubelet[3265]: E1108 00:26:21.939668 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7b877f7c4d-czsrv" podUID="eb4448af-cf19-4ce5-bbad-22d98ef7ab44" Nov 8 00:26:22.659876 systemd-networkd[1354]: vxlan.calico: Gained IPv6LL Nov 8 00:26:23.720569 containerd[1736]: time="2025-11-08T00:26:23.720007561Z" level=info msg="StopPodSandbox for \"e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b\"" Nov 8 00:26:23.720569 containerd[1736]: time="2025-11-08T00:26:23.720008261Z" level=info msg="StopPodSandbox for \"8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69\"" Nov 8 00:26:23.841182 containerd[1736]: 2025-11-08 00:26:23.786 [INFO][4811] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69" Nov 8 00:26:23.841182 containerd[1736]: 2025-11-08 00:26:23.790 [INFO][4811] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69" iface="eth0" netns="/var/run/netns/cni-6d34c04d-0d98-c0e7-4dd5-59316567561a" Nov 8 00:26:23.841182 containerd[1736]: 2025-11-08 00:26:23.790 [INFO][4811] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69" iface="eth0" netns="/var/run/netns/cni-6d34c04d-0d98-c0e7-4dd5-59316567561a" Nov 8 00:26:23.841182 containerd[1736]: 2025-11-08 00:26:23.790 [INFO][4811] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69" iface="eth0" netns="/var/run/netns/cni-6d34c04d-0d98-c0e7-4dd5-59316567561a" Nov 8 00:26:23.841182 containerd[1736]: 2025-11-08 00:26:23.791 [INFO][4811] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69" Nov 8 00:26:23.841182 containerd[1736]: 2025-11-08 00:26:23.791 [INFO][4811] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69" Nov 8 00:26:23.841182 containerd[1736]: 2025-11-08 00:26:23.821 [INFO][4827] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69" HandleID="k8s-pod-network.8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69" Workload="ci--4081.3.6--n--036966ce4d-k8s-coredns--674b8bbfcf--xhgqn-eth0" Nov 8 00:26:23.841182 containerd[1736]: 2025-11-08 00:26:23.821 [INFO][4827] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:23.841182 containerd[1736]: 2025-11-08 00:26:23.821 [INFO][4827] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:23.841182 containerd[1736]: 2025-11-08 00:26:23.829 [WARNING][4827] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69" HandleID="k8s-pod-network.8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69" Workload="ci--4081.3.6--n--036966ce4d-k8s-coredns--674b8bbfcf--xhgqn-eth0" Nov 8 00:26:23.841182 containerd[1736]: 2025-11-08 00:26:23.829 [INFO][4827] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69" HandleID="k8s-pod-network.8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69" Workload="ci--4081.3.6--n--036966ce4d-k8s-coredns--674b8bbfcf--xhgqn-eth0" Nov 8 00:26:23.841182 containerd[1736]: 2025-11-08 00:26:23.831 [INFO][4827] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:23.841182 containerd[1736]: 2025-11-08 00:26:23.834 [INFO][4811] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69" Nov 8 00:26:23.842335 containerd[1736]: time="2025-11-08T00:26:23.841923078Z" level=info msg="TearDown network for sandbox \"8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69\" successfully" Nov 8 00:26:23.842335 containerd[1736]: time="2025-11-08T00:26:23.841964378Z" level=info msg="StopPodSandbox for \"8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69\" returns successfully" Nov 8 00:26:23.844666 containerd[1736]: time="2025-11-08T00:26:23.843951990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xhgqn,Uid:ebfac5a4-d600-45d4-a79b-901e29ad49f6,Namespace:kube-system,Attempt:1,}" Nov 8 00:26:23.844966 systemd[1]: run-netns-cni\x2d6d34c04d\x2d0d98\x2dc0e7\x2d4dd5\x2d59316567561a.mount: Deactivated successfully. Nov 8 00:26:23.848636 containerd[1736]: 2025-11-08 00:26:23.788 [INFO][4812] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b" Nov 8 00:26:23.848636 containerd[1736]: 2025-11-08 00:26:23.789 [INFO][4812] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b" iface="eth0" netns="/var/run/netns/cni-72c83e34-c9c3-6aed-c473-f679cfefb1a8" Nov 8 00:26:23.848636 containerd[1736]: 2025-11-08 00:26:23.789 [INFO][4812] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b" iface="eth0" netns="/var/run/netns/cni-72c83e34-c9c3-6aed-c473-f679cfefb1a8" Nov 8 00:26:23.848636 containerd[1736]: 2025-11-08 00:26:23.790 [INFO][4812] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b" iface="eth0" netns="/var/run/netns/cni-72c83e34-c9c3-6aed-c473-f679cfefb1a8" Nov 8 00:26:23.848636 containerd[1736]: 2025-11-08 00:26:23.790 [INFO][4812] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b" Nov 8 00:26:23.848636 containerd[1736]: 2025-11-08 00:26:23.790 [INFO][4812] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b" Nov 8 00:26:23.848636 containerd[1736]: 2025-11-08 00:26:23.824 [INFO][4825] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b" HandleID="k8s-pod-network.e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b" Workload="ci--4081.3.6--n--036966ce4d-k8s-calico--kube--controllers--5ccd97dd97--f9lsk-eth0" Nov 8 00:26:23.848636 containerd[1736]: 2025-11-08 00:26:23.824 [INFO][4825] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:23.848636 containerd[1736]: 2025-11-08 00:26:23.831 [INFO][4825] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:23.848636 containerd[1736]: 2025-11-08 00:26:23.838 [WARNING][4825] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b" HandleID="k8s-pod-network.e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b" Workload="ci--4081.3.6--n--036966ce4d-k8s-calico--kube--controllers--5ccd97dd97--f9lsk-eth0" Nov 8 00:26:23.848636 containerd[1736]: 2025-11-08 00:26:23.838 [INFO][4825] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b" HandleID="k8s-pod-network.e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b" Workload="ci--4081.3.6--n--036966ce4d-k8s-calico--kube--controllers--5ccd97dd97--f9lsk-eth0" Nov 8 00:26:23.848636 containerd[1736]: 2025-11-08 00:26:23.841 [INFO][4825] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:23.848636 containerd[1736]: 2025-11-08 00:26:23.847 [INFO][4812] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b" Nov 8 00:26:23.849573 containerd[1736]: time="2025-11-08T00:26:23.849392922Z" level=info msg="TearDown network for sandbox \"e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b\" successfully" Nov 8 00:26:23.849573 containerd[1736]: time="2025-11-08T00:26:23.849429722Z" level=info msg="StopPodSandbox for \"e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b\" returns successfully" Nov 8 00:26:23.850465 containerd[1736]: time="2025-11-08T00:26:23.850433028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5ccd97dd97-f9lsk,Uid:415c1089-29e6-4262-b21f-188443e0b159,Namespace:calico-system,Attempt:1,}" Nov 8 00:26:23.854531 systemd[1]: run-netns-cni\x2d72c83e34\x2dc9c3\x2d6aed\x2dc473\x2df679cfefb1a8.mount: Deactivated successfully. Nov 8 00:26:24.054763 systemd-networkd[1354]: calief11604cfb5: Link UP Nov 8 00:26:24.067544 systemd-networkd[1354]: calief11604cfb5: Gained carrier Nov 8 00:26:24.096801 containerd[1736]: 2025-11-08 00:26:23.957 [INFO][4838] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--036966ce4d-k8s-coredns--674b8bbfcf--xhgqn-eth0 coredns-674b8bbfcf- kube-system ebfac5a4-d600-45d4-a79b-901e29ad49f6 960 0 2025-11-08 00:25:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-036966ce4d coredns-674b8bbfcf-xhgqn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calief11604cfb5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="528f2fd1e4e61b717c90dcfb5dce7fb701acb754885d1e370294eda0cc39c189" Namespace="kube-system" Pod="coredns-674b8bbfcf-xhgqn" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-coredns--674b8bbfcf--xhgqn-" Nov 8 00:26:24.096801 containerd[1736]: 2025-11-08 00:26:23.958 [INFO][4838] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="528f2fd1e4e61b717c90dcfb5dce7fb701acb754885d1e370294eda0cc39c189" Namespace="kube-system" Pod="coredns-674b8bbfcf-xhgqn" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-coredns--674b8bbfcf--xhgqn-eth0" Nov 8 00:26:24.096801 containerd[1736]: 2025-11-08 00:26:24.000 [INFO][4862] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="528f2fd1e4e61b717c90dcfb5dce7fb701acb754885d1e370294eda0cc39c189" HandleID="k8s-pod-network.528f2fd1e4e61b717c90dcfb5dce7fb701acb754885d1e370294eda0cc39c189" Workload="ci--4081.3.6--n--036966ce4d-k8s-coredns--674b8bbfcf--xhgqn-eth0" Nov 8 00:26:24.096801 containerd[1736]: 2025-11-08 00:26:24.000 [INFO][4862] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="528f2fd1e4e61b717c90dcfb5dce7fb701acb754885d1e370294eda0cc39c189" HandleID="k8s-pod-network.528f2fd1e4e61b717c90dcfb5dce7fb701acb754885d1e370294eda0cc39c189" Workload="ci--4081.3.6--n--036966ce4d-k8s-coredns--674b8bbfcf--xhgqn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f010), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-036966ce4d", "pod":"coredns-674b8bbfcf-xhgqn", "timestamp":"2025-11-08 00:26:24.000134908 +0000 UTC"}, Hostname:"ci-4081.3.6-n-036966ce4d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Nov 8 00:26:24.096801 containerd[1736]: 2025-11-08 00:26:24.000 [INFO][4862] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:24.096801 containerd[1736]: 2025-11-08 00:26:24.000 [INFO][4862] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:24.096801 containerd[1736]: 2025-11-08 00:26:24.000 [INFO][4862] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-036966ce4d' Nov 8 00:26:24.096801 containerd[1736]: 2025-11-08 00:26:24.008 [INFO][4862] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.528f2fd1e4e61b717c90dcfb5dce7fb701acb754885d1e370294eda0cc39c189" host="ci-4081.3.6-n-036966ce4d" Nov 8 00:26:24.096801 containerd[1736]: 2025-11-08 00:26:24.013 [INFO][4862] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-036966ce4d" Nov 8 00:26:24.096801 containerd[1736]: 2025-11-08 00:26:24.017 [INFO][4862] ipam/ipam.go 511: Trying affinity for 192.168.80.0/26 host="ci-4081.3.6-n-036966ce4d" Nov 8 00:26:24.096801 containerd[1736]: 2025-11-08 00:26:24.019 [INFO][4862] ipam/ipam.go 158: Attempting to load block cidr=192.168.80.0/26 host="ci-4081.3.6-n-036966ce4d" Nov 8 00:26:24.096801 containerd[1736]: 2025-11-08 00:26:24.021 [INFO][4862] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.80.0/26 host="ci-4081.3.6-n-036966ce4d" Nov 8 00:26:24.096801 containerd[1736]: 2025-11-08 00:26:24.021 [INFO][4862] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.80.0/26 handle="k8s-pod-network.528f2fd1e4e61b717c90dcfb5dce7fb701acb754885d1e370294eda0cc39c189" host="ci-4081.3.6-n-036966ce4d" Nov 8 00:26:24.096801 containerd[1736]: 2025-11-08 00:26:24.022 [INFO][4862] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.528f2fd1e4e61b717c90dcfb5dce7fb701acb754885d1e370294eda0cc39c189 Nov 8 00:26:24.096801 containerd[1736]: 2025-11-08 00:26:24.029 [INFO][4862] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.80.0/26 handle="k8s-pod-network.528f2fd1e4e61b717c90dcfb5dce7fb701acb754885d1e370294eda0cc39c189" host="ci-4081.3.6-n-036966ce4d" Nov 8 00:26:24.096801 containerd[1736]: 2025-11-08 00:26:24.042 [INFO][4862] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.80.2/26] block=192.168.80.0/26 handle="k8s-pod-network.528f2fd1e4e61b717c90dcfb5dce7fb701acb754885d1e370294eda0cc39c189" host="ci-4081.3.6-n-036966ce4d" Nov 8 00:26:24.096801 containerd[1736]: 2025-11-08 00:26:24.042 [INFO][4862] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.80.2/26] handle="k8s-pod-network.528f2fd1e4e61b717c90dcfb5dce7fb701acb754885d1e370294eda0cc39c189" host="ci-4081.3.6-n-036966ce4d" Nov 8 00:26:24.096801 containerd[1736]: 2025-11-08 00:26:24.043 [INFO][4862] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:26:24.096801 containerd[1736]: 2025-11-08 00:26:24.043 [INFO][4862] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.80.2/26] IPv6=[] ContainerID="528f2fd1e4e61b717c90dcfb5dce7fb701acb754885d1e370294eda0cc39c189" HandleID="k8s-pod-network.528f2fd1e4e61b717c90dcfb5dce7fb701acb754885d1e370294eda0cc39c189" Workload="ci--4081.3.6--n--036966ce4d-k8s-coredns--674b8bbfcf--xhgqn-eth0" Nov 8 00:26:24.098895 containerd[1736]: 2025-11-08 00:26:24.045 [INFO][4838] cni-plugin/k8s.go 418: Populated endpoint ContainerID="528f2fd1e4e61b717c90dcfb5dce7fb701acb754885d1e370294eda0cc39c189" Namespace="kube-system" Pod="coredns-674b8bbfcf-xhgqn" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-coredns--674b8bbfcf--xhgqn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--036966ce4d-k8s-coredns--674b8bbfcf--xhgqn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ebfac5a4-d600-45d4-a79b-901e29ad49f6", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 25, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-036966ce4d", ContainerID:"", Pod:"coredns-674b8bbfcf-xhgqn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.80.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calief11604cfb5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:24.098895 containerd[1736]: 2025-11-08 00:26:24.046 [INFO][4838] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.80.2/32] ContainerID="528f2fd1e4e61b717c90dcfb5dce7fb701acb754885d1e370294eda0cc39c189" Namespace="kube-system" Pod="coredns-674b8bbfcf-xhgqn" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-coredns--674b8bbfcf--xhgqn-eth0" Nov 8 00:26:24.098895 containerd[1736]: 2025-11-08 00:26:24.046 [INFO][4838] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calief11604cfb5 ContainerID="528f2fd1e4e61b717c90dcfb5dce7fb701acb754885d1e370294eda0cc39c189" Namespace="kube-system" Pod="coredns-674b8bbfcf-xhgqn" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-coredns--674b8bbfcf--xhgqn-eth0" Nov 8 00:26:24.098895 containerd[1736]: 2025-11-08 00:26:24.068 [INFO][4838] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="528f2fd1e4e61b717c90dcfb5dce7fb701acb754885d1e370294eda0cc39c189" Namespace="kube-system" Pod="coredns-674b8bbfcf-xhgqn" 
WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-coredns--674b8bbfcf--xhgqn-eth0" Nov 8 00:26:24.098895 containerd[1736]: 2025-11-08 00:26:24.069 [INFO][4838] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="528f2fd1e4e61b717c90dcfb5dce7fb701acb754885d1e370294eda0cc39c189" Namespace="kube-system" Pod="coredns-674b8bbfcf-xhgqn" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-coredns--674b8bbfcf--xhgqn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--036966ce4d-k8s-coredns--674b8bbfcf--xhgqn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ebfac5a4-d600-45d4-a79b-901e29ad49f6", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 25, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-036966ce4d", ContainerID:"528f2fd1e4e61b717c90dcfb5dce7fb701acb754885d1e370294eda0cc39c189", Pod:"coredns-674b8bbfcf-xhgqn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.80.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calief11604cfb5", MAC:"2a:6a:51:26:f2:ef", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:24.098895 containerd[1736]: 2025-11-08 00:26:24.091 [INFO][4838] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="528f2fd1e4e61b717c90dcfb5dce7fb701acb754885d1e370294eda0cc39c189" Namespace="kube-system" Pod="coredns-674b8bbfcf-xhgqn" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-coredns--674b8bbfcf--xhgqn-eth0" Nov 8 00:26:24.137960 containerd[1736]: time="2025-11-08T00:26:24.135650504Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:26:24.137960 containerd[1736]: time="2025-11-08T00:26:24.136351808Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:26:24.137960 containerd[1736]: time="2025-11-08T00:26:24.136370508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:24.137960 containerd[1736]: time="2025-11-08T00:26:24.136470909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:24.175141 systemd[1]: Started cri-containerd-528f2fd1e4e61b717c90dcfb5dce7fb701acb754885d1e370294eda0cc39c189.scope - libcontainer container 528f2fd1e4e61b717c90dcfb5dce7fb701acb754885d1e370294eda0cc39c189. Nov 8 00:26:24.189847 systemd-networkd[1354]: califf07c9633fd: Link UP Nov 8 00:26:24.190064 systemd-networkd[1354]: califf07c9633fd: Gained carrier Nov 8 00:26:24.230178 containerd[1736]: 2025-11-08 00:26:23.975 [INFO][4846] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--036966ce4d-k8s-calico--kube--controllers--5ccd97dd97--f9lsk-eth0 calico-kube-controllers-5ccd97dd97- calico-system 415c1089-29e6-4262-b21f-188443e0b159 961 0 2025-11-08 00:25:58 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5ccd97dd97 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-n-036966ce4d calico-kube-controllers-5ccd97dd97-f9lsk eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] califf07c9633fd [] [] }} ContainerID="fb05b2b2190ede4c39bc56d6c4863f894b2031e747de2b6d9822aa869978a622" Namespace="calico-system" Pod="calico-kube-controllers-5ccd97dd97-f9lsk" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-calico--kube--controllers--5ccd97dd97--f9lsk-" Nov 8 00:26:24.230178 containerd[1736]: 2025-11-08 00:26:23.975 [INFO][4846] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fb05b2b2190ede4c39bc56d6c4863f894b2031e747de2b6d9822aa869978a622" Namespace="calico-system" Pod="calico-kube-controllers-5ccd97dd97-f9lsk" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-calico--kube--controllers--5ccd97dd97--f9lsk-eth0" Nov 8 00:26:24.230178 containerd[1736]: 2025-11-08 00:26:24.014 [INFO][4868] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fb05b2b2190ede4c39bc56d6c4863f894b2031e747de2b6d9822aa869978a622" HandleID="k8s-pod-network.fb05b2b2190ede4c39bc56d6c4863f894b2031e747de2b6d9822aa869978a622" Workload="ci--4081.3.6--n--036966ce4d-k8s-calico--kube--controllers--5ccd97dd97--f9lsk-eth0" Nov 8 00:26:24.230178 containerd[1736]: 2025-11-08 00:26:24.015 [INFO][4868] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fb05b2b2190ede4c39bc56d6c4863f894b2031e747de2b6d9822aa869978a622" HandleID="k8s-pod-network.fb05b2b2190ede4c39bc56d6c4863f894b2031e747de2b6d9822aa869978a622" Workload="ci--4081.3.6--n--036966ce4d-k8s-calico--kube--controllers--5ccd97dd97--f9lsk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000332940), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-036966ce4d", "pod":"calico-kube-controllers-5ccd97dd97-f9lsk", "timestamp":"2025-11-08 00:26:24.014920295 +0000 UTC"}, Hostname:"ci-4081.3.6-n-036966ce4d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:26:24.230178 containerd[1736]: 2025-11-08 00:26:24.015 [INFO][4868] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:24.230178 containerd[1736]: 2025-11-08 00:26:24.043 [INFO][4868] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
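The coredns WorkloadEndpoint dump a few entries above prints its ports as Go hex literals. A quick illustrative conversion, not part of the log, confirms they are the usual CoreDNS ports:

```go
// Decode the hex port values from the coredns endpoint dump.
package main

import "fmt"

func main() {
	fmt.Println(0x35)   // 53   -> the "dns" (UDP) and "dns-tcp" (TCP) ports
	fmt.Println(0x23c1) // 9153 -> the "metrics" port CoreDNS exposes for Prometheus scraping
}
```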
Nov 8 00:26:24.230178 containerd[1736]: 2025-11-08 00:26:24.043 [INFO][4868] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-036966ce4d' Nov 8 00:26:24.230178 containerd[1736]: 2025-11-08 00:26:24.108 [INFO][4868] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fb05b2b2190ede4c39bc56d6c4863f894b2031e747de2b6d9822aa869978a622" host="ci-4081.3.6-n-036966ce4d" Nov 8 00:26:24.230178 containerd[1736]: 2025-11-08 00:26:24.118 [INFO][4868] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-036966ce4d" Nov 8 00:26:24.230178 containerd[1736]: 2025-11-08 00:26:24.125 [INFO][4868] ipam/ipam.go 511: Trying affinity for 192.168.80.0/26 host="ci-4081.3.6-n-036966ce4d" Nov 8 00:26:24.230178 containerd[1736]: 2025-11-08 00:26:24.131 [INFO][4868] ipam/ipam.go 158: Attempting to load block cidr=192.168.80.0/26 host="ci-4081.3.6-n-036966ce4d" Nov 8 00:26:24.230178 containerd[1736]: 2025-11-08 00:26:24.136 [INFO][4868] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.80.0/26 host="ci-4081.3.6-n-036966ce4d" Nov 8 00:26:24.230178 containerd[1736]: 2025-11-08 00:26:24.137 [INFO][4868] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.80.0/26 handle="k8s-pod-network.fb05b2b2190ede4c39bc56d6c4863f894b2031e747de2b6d9822aa869978a622" host="ci-4081.3.6-n-036966ce4d" Nov 8 00:26:24.230178 containerd[1736]: 2025-11-08 00:26:24.140 [INFO][4868] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fb05b2b2190ede4c39bc56d6c4863f894b2031e747de2b6d9822aa869978a622 Nov 8 00:26:24.230178 containerd[1736]: 2025-11-08 00:26:24.161 [INFO][4868] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.80.0/26 handle="k8s-pod-network.fb05b2b2190ede4c39bc56d6c4863f894b2031e747de2b6d9822aa869978a622" host="ci-4081.3.6-n-036966ce4d" Nov 8 00:26:24.230178 containerd[1736]: 2025-11-08 00:26:24.182 [INFO][4868] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.80.3/26] block=192.168.80.0/26 handle="k8s-pod-network.fb05b2b2190ede4c39bc56d6c4863f894b2031e747de2b6d9822aa869978a622" host="ci-4081.3.6-n-036966ce4d" Nov 8 00:26:24.230178 containerd[1736]: 2025-11-08 00:26:24.182 [INFO][4868] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.80.3/26] handle="k8s-pod-network.fb05b2b2190ede4c39bc56d6c4863f894b2031e747de2b6d9822aa869978a622" host="ci-4081.3.6-n-036966ce4d" Nov 8 00:26:24.230178 containerd[1736]: 2025-11-08 00:26:24.182 [INFO][4868] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:26:24.230178 containerd[1736]: 2025-11-08 00:26:24.182 [INFO][4868] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.80.3/26] IPv6=[] ContainerID="fb05b2b2190ede4c39bc56d6c4863f894b2031e747de2b6d9822aa869978a622" HandleID="k8s-pod-network.fb05b2b2190ede4c39bc56d6c4863f894b2031e747de2b6d9822aa869978a622" Workload="ci--4081.3.6--n--036966ce4d-k8s-calico--kube--controllers--5ccd97dd97--f9lsk-eth0" Nov 8 00:26:24.232407 containerd[1736]: 2025-11-08 00:26:24.184 [INFO][4846] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fb05b2b2190ede4c39bc56d6c4863f894b2031e747de2b6d9822aa869978a622" Namespace="calico-system" Pod="calico-kube-controllers-5ccd97dd97-f9lsk" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-calico--kube--controllers--5ccd97dd97--f9lsk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--036966ce4d-k8s-calico--kube--controllers--5ccd97dd97--f9lsk-eth0", GenerateName:"calico-kube-controllers-5ccd97dd97-", Namespace:"calico-system", SelfLink:"", UID:"415c1089-29e6-4262-b21f-188443e0b159", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 25, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5ccd97dd97", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-036966ce4d", ContainerID:"", Pod:"calico-kube-controllers-5ccd97dd97-f9lsk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.80.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califf07c9633fd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:24.232407 containerd[1736]: 2025-11-08 00:26:24.184 [INFO][4846] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.80.3/32] ContainerID="fb05b2b2190ede4c39bc56d6c4863f894b2031e747de2b6d9822aa869978a622" Namespace="calico-system" Pod="calico-kube-controllers-5ccd97dd97-f9lsk" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-calico--kube--controllers--5ccd97dd97--f9lsk-eth0" Nov 8 00:26:24.232407 containerd[1736]: 2025-11-08 00:26:24.184 [INFO][4846] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califf07c9633fd ContainerID="fb05b2b2190ede4c39bc56d6c4863f894b2031e747de2b6d9822aa869978a622" Namespace="calico-system" Pod="calico-kube-controllers-5ccd97dd97-f9lsk" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-calico--kube--controllers--5ccd97dd97--f9lsk-eth0" Nov 8 00:26:24.232407 containerd[1736]: 2025-11-08 00:26:24.192 [INFO][4846] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fb05b2b2190ede4c39bc56d6c4863f894b2031e747de2b6d9822aa869978a622" Namespace="calico-system" Pod="calico-kube-controllers-5ccd97dd97-f9lsk" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-calico--kube--controllers--5ccd97dd97--f9lsk-eth0" 
Nov 8 00:26:24.232407 containerd[1736]: 2025-11-08 00:26:24.192 [INFO][4846] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fb05b2b2190ede4c39bc56d6c4863f894b2031e747de2b6d9822aa869978a622" Namespace="calico-system" Pod="calico-kube-controllers-5ccd97dd97-f9lsk" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-calico--kube--controllers--5ccd97dd97--f9lsk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--036966ce4d-k8s-calico--kube--controllers--5ccd97dd97--f9lsk-eth0", GenerateName:"calico-kube-controllers-5ccd97dd97-", Namespace:"calico-system", SelfLink:"", UID:"415c1089-29e6-4262-b21f-188443e0b159", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 25, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5ccd97dd97", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-036966ce4d", ContainerID:"fb05b2b2190ede4c39bc56d6c4863f894b2031e747de2b6d9822aa869978a622", Pod:"calico-kube-controllers-5ccd97dd97-f9lsk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.80.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califf07c9633fd", MAC:"82:d4:68:00:2d:be", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:24.232407 containerd[1736]: 2025-11-08 00:26:24.220 [INFO][4846] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fb05b2b2190ede4c39bc56d6c4863f894b2031e747de2b6d9822aa869978a622" Namespace="calico-system" Pod="calico-kube-controllers-5ccd97dd97-f9lsk" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-calico--kube--controllers--5ccd97dd97--f9lsk-eth0" Nov 8 00:26:24.274928 containerd[1736]: time="2025-11-08T00:26:24.274448720Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:26:24.274928 containerd[1736]: time="2025-11-08T00:26:24.274507520Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:26:24.274928 containerd[1736]: time="2025-11-08T00:26:24.274552621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:24.274928 containerd[1736]: time="2025-11-08T00:26:24.274663721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:24.298527 containerd[1736]: time="2025-11-08T00:26:24.298145659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xhgqn,Uid:ebfac5a4-d600-45d4-a79b-901e29ad49f6,Namespace:kube-system,Attempt:1,} returns sandbox id \"528f2fd1e4e61b717c90dcfb5dce7fb701acb754885d1e370294eda0cc39c189\"" Nov 8 00:26:24.319215 systemd[1]: Started cri-containerd-fb05b2b2190ede4c39bc56d6c4863f894b2031e747de2b6d9822aa869978a622.scope - libcontainer container fb05b2b2190ede4c39bc56d6c4863f894b2031e747de2b6d9822aa869978a622. Nov 8 00:26:24.323955 containerd[1736]: time="2025-11-08T00:26:24.323907611Z" level=info msg="CreateContainer within sandbox \"528f2fd1e4e61b717c90dcfb5dce7fb701acb754885d1e370294eda0cc39c189\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:26:24.397891 containerd[1736]: time="2025-11-08T00:26:24.397328142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5ccd97dd97-f9lsk,Uid:415c1089-29e6-4262-b21f-188443e0b159,Namespace:calico-system,Attempt:1,} returns sandbox id \"fb05b2b2190ede4c39bc56d6c4863f894b2031e747de2b6d9822aa869978a622\"" Nov 8 00:26:24.398497 containerd[1736]: time="2025-11-08T00:26:24.398311948Z" level=info msg="CreateContainer within sandbox \"528f2fd1e4e61b717c90dcfb5dce7fb701acb754885d1e370294eda0cc39c189\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cae64b31a61b9f7bb3830fb325891ec3a750c4bc76286f5459d8c3d7285cb6d5\"" Nov 8 00:26:24.401029 containerd[1736]: time="2025-11-08T00:26:24.400467761Z" level=info msg="StartContainer for \"cae64b31a61b9f7bb3830fb325891ec3a750c4bc76286f5459d8c3d7285cb6d5\"" Nov 8 00:26:24.406657 containerd[1736]: time="2025-11-08T00:26:24.405526090Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:26:24.445567 systemd[1]: Started cri-containerd-cae64b31a61b9f7bb3830fb325891ec3a750c4bc76286f5459d8c3d7285cb6d5.scope - libcontainer container cae64b31a61b9f7bb3830fb325891ec3a750c4bc76286f5459d8c3d7285cb6d5. 
Nov 8 00:26:24.491858 containerd[1736]: time="2025-11-08T00:26:24.491746297Z" level=info msg="StartContainer for \"cae64b31a61b9f7bb3830fb325891ec3a750c4bc76286f5459d8c3d7285cb6d5\" returns successfully" Nov 8 00:26:24.675569 containerd[1736]: time="2025-11-08T00:26:24.675477077Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:26:24.687377 containerd[1736]: time="2025-11-08T00:26:24.687271947Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:26:24.687377 containerd[1736]: time="2025-11-08T00:26:24.687318147Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:26:24.687616 kubelet[3265]: E1108 00:26:24.687563 3265 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:26:24.688343 kubelet[3265]: E1108 00:26:24.687622 3265 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:26:24.688343 kubelet[3265]: E1108 00:26:24.687921 3265 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9z8kg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5ccd97dd97-f9lsk_calico-system(415c1089-29e6-4262-b21f-188443e0b159): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:26:24.689180 kubelet[3265]: E1108 00:26:24.689104 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5ccd97dd97-f9lsk" podUID="415c1089-29e6-4262-b21f-188443e0b159" Nov 8 00:26:24.718959 containerd[1736]: time="2025-11-08T00:26:24.718282929Z" level=info msg="StopPodSandbox for \"99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109\"" Nov 8 00:26:24.832370 containerd[1736]: 2025-11-08 00:26:24.787 [INFO][5021] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109" Nov 8 00:26:24.832370 containerd[1736]: 2025-11-08 00:26:24.788 [INFO][5021] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109" iface="eth0" netns="/var/run/netns/cni-0af21e3b-4885-d578-1757-ec97004a5ec2" Nov 8 00:26:24.832370 containerd[1736]: 2025-11-08 00:26:24.788 [INFO][5021] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109" iface="eth0" netns="/var/run/netns/cni-0af21e3b-4885-d578-1757-ec97004a5ec2" Nov 8 00:26:24.832370 containerd[1736]: 2025-11-08 00:26:24.789 [INFO][5021] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109" iface="eth0" netns="/var/run/netns/cni-0af21e3b-4885-d578-1757-ec97004a5ec2" Nov 8 00:26:24.832370 containerd[1736]: 2025-11-08 00:26:24.789 [INFO][5021] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109" Nov 8 00:26:24.832370 containerd[1736]: 2025-11-08 00:26:24.789 [INFO][5021] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109" Nov 8 00:26:24.832370 containerd[1736]: 2025-11-08 00:26:24.819 [INFO][5029] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109" HandleID="k8s-pod-network.99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109" Workload="ci--4081.3.6--n--036966ce4d-k8s-csi--node--driver--wpv6d-eth0" Nov 8 00:26:24.832370 containerd[1736]: 2025-11-08 00:26:24.820 [INFO][5029] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:24.832370 containerd[1736]: 2025-11-08 00:26:24.820 [INFO][5029] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:24.832370 containerd[1736]: 2025-11-08 00:26:24.827 [WARNING][5029] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109" HandleID="k8s-pod-network.99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109" Workload="ci--4081.3.6--n--036966ce4d-k8s-csi--node--driver--wpv6d-eth0" Nov 8 00:26:24.832370 containerd[1736]: 2025-11-08 00:26:24.827 [INFO][5029] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109" HandleID="k8s-pod-network.99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109" Workload="ci--4081.3.6--n--036966ce4d-k8s-csi--node--driver--wpv6d-eth0" Nov 8 00:26:24.832370 containerd[1736]: 2025-11-08 00:26:24.829 [INFO][5029] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:24.832370 containerd[1736]: 2025-11-08 00:26:24.830 [INFO][5021] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109" Nov 8 00:26:24.833424 containerd[1736]: time="2025-11-08T00:26:24.832534900Z" level=info msg="TearDown network for sandbox \"99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109\" successfully" Nov 8 00:26:24.833424 containerd[1736]: time="2025-11-08T00:26:24.832578801Z" level=info msg="StopPodSandbox for \"99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109\" returns successfully" Nov 8 00:26:24.834895 containerd[1736]: time="2025-11-08T00:26:24.834766813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wpv6d,Uid:88161698-8450-46cd-aabf-3650fadd565e,Namespace:calico-system,Attempt:1,}" Nov 8 00:26:24.850692 systemd[1]: run-netns-cni\x2d0af21e3b\x2d4885\x2dd578\x2d1757\x2dec97004a5ec2.mount: Deactivated successfully. 
Nov 8 00:26:24.965123 kubelet[3265]: E1108 00:26:24.964956 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5ccd97dd97-f9lsk" podUID="415c1089-29e6-4262-b21f-188443e0b159" Nov 8 00:26:25.019537 kubelet[3265]: I1108 00:26:25.019460 3265 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-xhgqn" podStartSLOduration=43.019436799 podStartE2EDuration="43.019436799s" podCreationTimestamp="2025-11-08 00:25:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:26:24.983565588 +0000 UTC m=+49.407607230" watchObservedRunningTime="2025-11-08 00:26:25.019436799 +0000 UTC m=+49.443478441" Nov 8 00:26:25.056821 systemd-networkd[1354]: calic4131ce8790: Link UP Nov 8 00:26:25.057093 systemd-networkd[1354]: calic4131ce8790: Gained carrier Nov 8 00:26:25.087797 containerd[1736]: 2025-11-08 00:26:24.920 [INFO][5035] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--036966ce4d-k8s-csi--node--driver--wpv6d-eth0 csi-node-driver- calico-system 88161698-8450-46cd-aabf-3650fadd565e 979 0 2025-11-08 00:25:58 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-n-036966ce4d csi-node-driver-wpv6d eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic4131ce8790 [] [] }} ContainerID="eb4832bdf74bb867781a4f522be84dd1fd978de39ba363e0c66e66057fa9dd77" Namespace="calico-system" Pod="csi-node-driver-wpv6d" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-csi--node--driver--wpv6d-" Nov 8 00:26:25.087797 containerd[1736]: 2025-11-08 00:26:24.920 [INFO][5035] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="eb4832bdf74bb867781a4f522be84dd1fd978de39ba363e0c66e66057fa9dd77" Namespace="calico-system" Pod="csi-node-driver-wpv6d" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-csi--node--driver--wpv6d-eth0" Nov 8 00:26:25.087797 containerd[1736]: 2025-11-08 00:26:24.952 [INFO][5048] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eb4832bdf74bb867781a4f522be84dd1fd978de39ba363e0c66e66057fa9dd77" HandleID="k8s-pod-network.eb4832bdf74bb867781a4f522be84dd1fd978de39ba363e0c66e66057fa9dd77" Workload="ci--4081.3.6--n--036966ce4d-k8s-csi--node--driver--wpv6d-eth0" Nov 8 00:26:25.087797 containerd[1736]: 2025-11-08 00:26:24.952 [INFO][5048] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="eb4832bdf74bb867781a4f522be84dd1fd978de39ba363e0c66e66057fa9dd77" HandleID="k8s-pod-network.eb4832bdf74bb867781a4f522be84dd1fd978de39ba363e0c66e66057fa9dd77" Workload="ci--4081.3.6--n--036966ce4d-k8s-csi--node--driver--wpv6d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0xc0002d5010), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-036966ce4d", "pod":"csi-node-driver-wpv6d", "timestamp":"2025-11-08 00:26:24.952242804 +0000 UTC"}, Hostname:"ci-4081.3.6-n-036966ce4d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:26:25.087797 containerd[1736]: 2025-11-08 00:26:24.952 [INFO][5048] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:25.087797 containerd[1736]: 2025-11-08 00:26:24.952 [INFO][5048] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:25.087797 containerd[1736]: 2025-11-08 00:26:24.952 [INFO][5048] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-036966ce4d' Nov 8 00:26:25.087797 containerd[1736]: 2025-11-08 00:26:24.964 [INFO][5048] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.eb4832bdf74bb867781a4f522be84dd1fd978de39ba363e0c66e66057fa9dd77" host="ci-4081.3.6-n-036966ce4d" Nov 8 00:26:25.087797 containerd[1736]: 2025-11-08 00:26:24.974 [INFO][5048] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-036966ce4d" Nov 8 00:26:25.087797 containerd[1736]: 2025-11-08 00:26:24.992 [INFO][5048] ipam/ipam.go 511: Trying affinity for 192.168.80.0/26 host="ci-4081.3.6-n-036966ce4d" Nov 8 00:26:25.087797 containerd[1736]: 2025-11-08 00:26:25.000 [INFO][5048] ipam/ipam.go 158: Attempting to load block cidr=192.168.80.0/26 host="ci-4081.3.6-n-036966ce4d" Nov 8 00:26:25.087797 containerd[1736]: 2025-11-08 00:26:25.008 [INFO][5048] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.80.0/26 host="ci-4081.3.6-n-036966ce4d" Nov 8 00:26:25.087797 containerd[1736]: 2025-11-08 00:26:25.008 [INFO][5048] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.80.0/26 handle="k8s-pod-network.eb4832bdf74bb867781a4f522be84dd1fd978de39ba363e0c66e66057fa9dd77" host="ci-4081.3.6-n-036966ce4d" Nov 8 00:26:25.087797 containerd[1736]: 2025-11-08 00:26:25.012 [INFO][5048] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.eb4832bdf74bb867781a4f522be84dd1fd978de39ba363e0c66e66057fa9dd77 Nov 8 00:26:25.087797 containerd[1736]: 2025-11-08 00:26:25.026 [INFO][5048] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.80.0/26 handle="k8s-pod-network.eb4832bdf74bb867781a4f522be84dd1fd978de39ba363e0c66e66057fa9dd77" host="ci-4081.3.6-n-036966ce4d" Nov 8 00:26:25.087797 containerd[1736]: 2025-11-08 00:26:25.045 [INFO][5048] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.80.4/26] block=192.168.80.0/26 handle="k8s-pod-network.eb4832bdf74bb867781a4f522be84dd1fd978de39ba363e0c66e66057fa9dd77" host="ci-4081.3.6-n-036966ce4d" Nov 8 00:26:25.087797 containerd[1736]: 2025-11-08 00:26:25.045 [INFO][5048] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.80.4/26] handle="k8s-pod-network.eb4832bdf74bb867781a4f522be84dd1fd978de39ba363e0c66e66057fa9dd77" host="ci-4081.3.6-n-036966ce4d" Nov 8 00:26:25.087797 containerd[1736]: 2025-11-08 00:26:25.045 [INFO][5048] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:26:25.087797 containerd[1736]: 2025-11-08 00:26:25.045 [INFO][5048] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.80.4/26] IPv6=[] ContainerID="eb4832bdf74bb867781a4f522be84dd1fd978de39ba363e0c66e66057fa9dd77" HandleID="k8s-pod-network.eb4832bdf74bb867781a4f522be84dd1fd978de39ba363e0c66e66057fa9dd77" Workload="ci--4081.3.6--n--036966ce4d-k8s-csi--node--driver--wpv6d-eth0" Nov 8 00:26:25.090680 containerd[1736]: 2025-11-08 00:26:25.052 [INFO][5035] cni-plugin/k8s.go 418: Populated endpoint ContainerID="eb4832bdf74bb867781a4f522be84dd1fd978de39ba363e0c66e66057fa9dd77" Namespace="calico-system" Pod="csi-node-driver-wpv6d" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-csi--node--driver--wpv6d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--036966ce4d-k8s-csi--node--driver--wpv6d-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"88161698-8450-46cd-aabf-3650fadd565e", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 25, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-036966ce4d", ContainerID:"", Pod:"csi-node-driver-wpv6d", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.80.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic4131ce8790", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:25.090680 containerd[1736]: 2025-11-08 00:26:25.052 [INFO][5035] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.80.4/32] ContainerID="eb4832bdf74bb867781a4f522be84dd1fd978de39ba363e0c66e66057fa9dd77" Namespace="calico-system" Pod="csi-node-driver-wpv6d" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-csi--node--driver--wpv6d-eth0" Nov 8 00:26:25.090680 containerd[1736]: 2025-11-08 00:26:25.053 [INFO][5035] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic4131ce8790 ContainerID="eb4832bdf74bb867781a4f522be84dd1fd978de39ba363e0c66e66057fa9dd77" Namespace="calico-system" Pod="csi-node-driver-wpv6d" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-csi--node--driver--wpv6d-eth0" Nov 8 00:26:25.090680 containerd[1736]: 2025-11-08 00:26:25.056 [INFO][5035] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eb4832bdf74bb867781a4f522be84dd1fd978de39ba363e0c66e66057fa9dd77" Namespace="calico-system" Pod="csi-node-driver-wpv6d" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-csi--node--driver--wpv6d-eth0" Nov 8 00:26:25.090680 containerd[1736]: 2025-11-08 00:26:25.056 [INFO][5035] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="eb4832bdf74bb867781a4f522be84dd1fd978de39ba363e0c66e66057fa9dd77" Namespace="calico-system" Pod="csi-node-driver-wpv6d" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-csi--node--driver--wpv6d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--036966ce4d-k8s-csi--node--driver--wpv6d-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"88161698-8450-46cd-aabf-3650fadd565e", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 25, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-036966ce4d", ContainerID:"eb4832bdf74bb867781a4f522be84dd1fd978de39ba363e0c66e66057fa9dd77", Pod:"csi-node-driver-wpv6d", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.80.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic4131ce8790", MAC:"56:b0:9d:c3:3d:43", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:25.090680 containerd[1736]: 2025-11-08 00:26:25.084 [INFO][5035] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="eb4832bdf74bb867781a4f522be84dd1fd978de39ba363e0c66e66057fa9dd77" Namespace="calico-system" Pod="csi-node-driver-wpv6d" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-csi--node--driver--wpv6d-eth0" Nov 8 00:26:25.120787 containerd[1736]: time="2025-11-08T00:26:25.120510993Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:26:25.120787 containerd[1736]: time="2025-11-08T00:26:25.120592794Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:26:25.120787 containerd[1736]: time="2025-11-08T00:26:25.120612694Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:25.121659 containerd[1736]: time="2025-11-08T00:26:25.121336598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:25.158889 systemd[1]: Started cri-containerd-eb4832bdf74bb867781a4f522be84dd1fd978de39ba363e0c66e66057fa9dd77.scope - libcontainer container eb4832bdf74bb867781a4f522be84dd1fd978de39ba363e0c66e66057fa9dd77. 
Nov 8 00:26:25.184254 containerd[1736]: time="2025-11-08T00:26:25.184165267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wpv6d,Uid:88161698-8450-46cd-aabf-3650fadd565e,Namespace:calico-system,Attempt:1,} returns sandbox id \"eb4832bdf74bb867781a4f522be84dd1fd978de39ba363e0c66e66057fa9dd77\"" Nov 8 00:26:25.187311 containerd[1736]: time="2025-11-08T00:26:25.187097184Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:26:25.219977 systemd-networkd[1354]: calief11604cfb5: Gained IPv6LL Nov 8 00:26:25.429126 containerd[1736]: time="2025-11-08T00:26:25.429076807Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:26:25.431777 containerd[1736]: time="2025-11-08T00:26:25.431664022Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:26:25.431777 containerd[1736]: time="2025-11-08T00:26:25.431726122Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:26:25.432035 kubelet[3265]: E1108 00:26:25.431940 3265 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:26:25.432035 kubelet[3265]: E1108 00:26:25.432000 3265 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:26:25.432217 kubelet[3265]: E1108 00:26:25.432154 3265 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-56rtw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wpv6d_calico-system(88161698-8450-46cd-aabf-3650fadd565e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:26:25.434943 containerd[1736]: time="2025-11-08T00:26:25.434912141Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:26:25.683845 containerd[1736]: time="2025-11-08T00:26:25.683773704Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:26:25.687680 containerd[1736]: time="2025-11-08T00:26:25.687623726Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:26:25.687818 containerd[1736]: time="2025-11-08T00:26:25.687735127Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:26:25.687983 kubelet[3265]: E1108 00:26:25.687939 3265 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:26:25.688387 kubelet[3265]: E1108 00:26:25.687999 3265 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:26:25.688387 kubelet[3265]: E1108 00:26:25.688167 3265 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-56rtw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wpv6d_calico-system(88161698-8450-46cd-aabf-3650fadd565e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:26:25.689995 kubelet[3265]: E1108 00:26:25.689393 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not 
found\"]" pod="calico-system/csi-node-driver-wpv6d" podUID="88161698-8450-46cd-aabf-3650fadd565e" Nov 8 00:26:25.721184 containerd[1736]: time="2025-11-08T00:26:25.721119823Z" level=info msg="StopPodSandbox for \"5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475\"" Nov 8 00:26:25.732464 systemd-networkd[1354]: califf07c9633fd: Gained IPv6LL Nov 8 00:26:25.811374 containerd[1736]: 2025-11-08 00:26:25.773 [INFO][5117] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475" Nov 8 00:26:25.811374 containerd[1736]: 2025-11-08 00:26:25.773 [INFO][5117] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475" iface="eth0" netns="/var/run/netns/cni-0ecea72c-55a5-9caa-b9eb-610accd90cf9" Nov 8 00:26:25.811374 containerd[1736]: 2025-11-08 00:26:25.773 [INFO][5117] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475" iface="eth0" netns="/var/run/netns/cni-0ecea72c-55a5-9caa-b9eb-610accd90cf9" Nov 8 00:26:25.811374 containerd[1736]: 2025-11-08 00:26:25.774 [INFO][5117] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475" iface="eth0" netns="/var/run/netns/cni-0ecea72c-55a5-9caa-b9eb-610accd90cf9" Nov 8 00:26:25.811374 containerd[1736]: 2025-11-08 00:26:25.774 [INFO][5117] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475" Nov 8 00:26:25.811374 containerd[1736]: 2025-11-08 00:26:25.774 [INFO][5117] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475" Nov 8 00:26:25.811374 containerd[1736]: 2025-11-08 00:26:25.800 [INFO][5125] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475" HandleID="k8s-pod-network.5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475" Workload="ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--579774f8c5--5sn5r-eth0" Nov 8 00:26:25.811374 containerd[1736]: 2025-11-08 00:26:25.800 [INFO][5125] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:25.811374 containerd[1736]: 2025-11-08 00:26:25.800 [INFO][5125] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:25.811374 containerd[1736]: 2025-11-08 00:26:25.806 [WARNING][5125] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475" HandleID="k8s-pod-network.5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475" Workload="ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--579774f8c5--5sn5r-eth0" Nov 8 00:26:25.811374 containerd[1736]: 2025-11-08 00:26:25.806 [INFO][5125] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475" HandleID="k8s-pod-network.5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475" Workload="ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--579774f8c5--5sn5r-eth0" Nov 8 00:26:25.811374 containerd[1736]: 2025-11-08 00:26:25.808 [INFO][5125] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:26:25.811374 containerd[1736]: 2025-11-08 00:26:25.810 [INFO][5117] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475" Nov 8 00:26:25.812271 containerd[1736]: time="2025-11-08T00:26:25.811666856Z" level=info msg="TearDown network for sandbox \"5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475\" successfully" Nov 8 00:26:25.812271 containerd[1736]: time="2025-11-08T00:26:25.811736356Z" level=info msg="StopPodSandbox for \"5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475\" returns successfully" Nov 8 00:26:25.813257 containerd[1736]: time="2025-11-08T00:26:25.812797962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-579774f8c5-5sn5r,Uid:4bf1c931-a10c-42c4-bece-79d61b489c62,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:26:25.847832 systemd[1]: run-netns-cni\x2d0ecea72c\x2d55a5\x2d9caa\x2db9eb\x2d610accd90cf9.mount: Deactivated successfully. Nov 8 00:26:25.970335 kubelet[3265]: E1108 00:26:25.970050 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5ccd97dd97-f9lsk" podUID="415c1089-29e6-4262-b21f-188443e0b159" Nov 8 00:26:25.970335 kubelet[3265]: E1108 00:26:25.970149 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wpv6d" podUID="88161698-8450-46cd-aabf-3650fadd565e" Nov 8 00:26:25.986027 systemd-networkd[1354]: cali77a93201f5c: Link UP Nov 8 00:26:25.987328 systemd-networkd[1354]: cali77a93201f5c: Gained carrier Nov 8 00:26:26.010604 containerd[1736]: 2025-11-08 00:26:25.905 [INFO][5132] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--579774f8c5--5sn5r-eth0 calico-apiserver-579774f8c5- calico-apiserver 4bf1c931-a10c-42c4-bece-79d61b489c62 1004 0 2025-11-08 00:25:53 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:579774f8c5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 
ci-4081.3.6-n-036966ce4d calico-apiserver-579774f8c5-5sn5r eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali77a93201f5c [] [] }} ContainerID="782a4d4ad3face84e5969448e1816741ac956f0b097355ebdcc7452c2c7f58ce" Namespace="calico-apiserver" Pod="calico-apiserver-579774f8c5-5sn5r" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--579774f8c5--5sn5r-" Nov 8 00:26:26.010604 containerd[1736]: 2025-11-08 00:26:25.906 [INFO][5132] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="782a4d4ad3face84e5969448e1816741ac956f0b097355ebdcc7452c2c7f58ce" Namespace="calico-apiserver" Pod="calico-apiserver-579774f8c5-5sn5r" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--579774f8c5--5sn5r-eth0" Nov 8 00:26:26.010604 containerd[1736]: 2025-11-08 00:26:25.929 [INFO][5145] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="782a4d4ad3face84e5969448e1816741ac956f0b097355ebdcc7452c2c7f58ce" HandleID="k8s-pod-network.782a4d4ad3face84e5969448e1816741ac956f0b097355ebdcc7452c2c7f58ce" Workload="ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--579774f8c5--5sn5r-eth0" Nov 8 00:26:26.010604 containerd[1736]: 2025-11-08 00:26:25.929 [INFO][5145] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="782a4d4ad3face84e5969448e1816741ac956f0b097355ebdcc7452c2c7f58ce" HandleID="k8s-pod-network.782a4d4ad3face84e5969448e1816741ac956f0b097355ebdcc7452c2c7f58ce" Workload="ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--579774f8c5--5sn5r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f260), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-036966ce4d", "pod":"calico-apiserver-579774f8c5-5sn5r", "timestamp":"2025-11-08 00:26:25.92978425 +0000 UTC"}, Hostname:"ci-4081.3.6-n-036966ce4d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:26:26.010604 containerd[1736]: 2025-11-08 00:26:25.930 [INFO][5145] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:26.010604 containerd[1736]: 2025-11-08 00:26:25.930 [INFO][5145] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:26:26.010604 containerd[1736]: 2025-11-08 00:26:25.930 [INFO][5145] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-036966ce4d' Nov 8 00:26:26.010604 containerd[1736]: 2025-11-08 00:26:25.937 [INFO][5145] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.782a4d4ad3face84e5969448e1816741ac956f0b097355ebdcc7452c2c7f58ce" host="ci-4081.3.6-n-036966ce4d" Nov 8 00:26:26.010604 containerd[1736]: 2025-11-08 00:26:25.941 [INFO][5145] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-036966ce4d" Nov 8 00:26:26.010604 containerd[1736]: 2025-11-08 00:26:25.945 [INFO][5145] ipam/ipam.go 511: Trying affinity for 192.168.80.0/26 host="ci-4081.3.6-n-036966ce4d" Nov 8 00:26:26.010604 containerd[1736]: 2025-11-08 00:26:25.946 [INFO][5145] ipam/ipam.go 158: Attempting to load block cidr=192.168.80.0/26 host="ci-4081.3.6-n-036966ce4d" Nov 8 00:26:26.010604 containerd[1736]: 2025-11-08 00:26:25.950 [INFO][5145] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.80.0/26 host="ci-4081.3.6-n-036966ce4d" Nov 8 00:26:26.010604 containerd[1736]: 2025-11-08 00:26:25.950 [INFO][5145] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.80.0/26 handle="k8s-pod-network.782a4d4ad3face84e5969448e1816741ac956f0b097355ebdcc7452c2c7f58ce" host="ci-4081.3.6-n-036966ce4d" Nov 8 00:26:26.010604 containerd[1736]: 2025-11-08 00:26:25.952 [INFO][5145] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.782a4d4ad3face84e5969448e1816741ac956f0b097355ebdcc7452c2c7f58ce Nov 8 00:26:26.010604 containerd[1736]: 2025-11-08 00:26:25.961 [INFO][5145] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.80.0/26 handle="k8s-pod-network.782a4d4ad3face84e5969448e1816741ac956f0b097355ebdcc7452c2c7f58ce" host="ci-4081.3.6-n-036966ce4d" Nov 8 00:26:26.010604 containerd[1736]: 2025-11-08 00:26:25.978 [INFO][5145] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.80.5/26] block=192.168.80.0/26 handle="k8s-pod-network.782a4d4ad3face84e5969448e1816741ac956f0b097355ebdcc7452c2c7f58ce" host="ci-4081.3.6-n-036966ce4d" Nov 8 00:26:26.010604 containerd[1736]: 2025-11-08 00:26:25.978 [INFO][5145] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.80.5/26] handle="k8s-pod-network.782a4d4ad3face84e5969448e1816741ac956f0b097355ebdcc7452c2c7f58ce" host="ci-4081.3.6-n-036966ce4d" Nov 8 00:26:26.010604 containerd[1736]: 2025-11-08 00:26:25.978 [INFO][5145] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:26:26.010604 containerd[1736]: 2025-11-08 00:26:25.978 [INFO][5145] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.80.5/26] IPv6=[] ContainerID="782a4d4ad3face84e5969448e1816741ac956f0b097355ebdcc7452c2c7f58ce" HandleID="k8s-pod-network.782a4d4ad3face84e5969448e1816741ac956f0b097355ebdcc7452c2c7f58ce" Workload="ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--579774f8c5--5sn5r-eth0" Nov 8 00:26:26.013308 containerd[1736]: 2025-11-08 00:26:25.981 [INFO][5132] cni-plugin/k8s.go 418: Populated endpoint ContainerID="782a4d4ad3face84e5969448e1816741ac956f0b097355ebdcc7452c2c7f58ce" Namespace="calico-apiserver" Pod="calico-apiserver-579774f8c5-5sn5r" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--579774f8c5--5sn5r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--579774f8c5--5sn5r-eth0", GenerateName:"calico-apiserver-579774f8c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"4bf1c931-a10c-42c4-bece-79d61b489c62", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 25, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"579774f8c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-036966ce4d", ContainerID:"", Pod:"calico-apiserver-579774f8c5-5sn5r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.80.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali77a93201f5c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:26.013308 containerd[1736]: 2025-11-08 00:26:25.981 [INFO][5132] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.80.5/32] ContainerID="782a4d4ad3face84e5969448e1816741ac956f0b097355ebdcc7452c2c7f58ce" Namespace="calico-apiserver" Pod="calico-apiserver-579774f8c5-5sn5r" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--579774f8c5--5sn5r-eth0" Nov 8 00:26:26.013308 containerd[1736]: 2025-11-08 00:26:25.981 [INFO][5132] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali77a93201f5c ContainerID="782a4d4ad3face84e5969448e1816741ac956f0b097355ebdcc7452c2c7f58ce" Namespace="calico-apiserver" Pod="calico-apiserver-579774f8c5-5sn5r" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--579774f8c5--5sn5r-eth0" Nov 8 00:26:26.013308 containerd[1736]: 2025-11-08 00:26:25.987 [INFO][5132] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="782a4d4ad3face84e5969448e1816741ac956f0b097355ebdcc7452c2c7f58ce" Namespace="calico-apiserver" Pod="calico-apiserver-579774f8c5-5sn5r" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--579774f8c5--5sn5r-eth0" Nov 8 00:26:26.013308 containerd[1736]: 2025-11-08 00:26:25.987 [INFO][5132] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="782a4d4ad3face84e5969448e1816741ac956f0b097355ebdcc7452c2c7f58ce" Namespace="calico-apiserver" Pod="calico-apiserver-579774f8c5-5sn5r" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--579774f8c5--5sn5r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--579774f8c5--5sn5r-eth0", GenerateName:"calico-apiserver-579774f8c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"4bf1c931-a10c-42c4-bece-79d61b489c62", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 25, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"579774f8c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-036966ce4d", ContainerID:"782a4d4ad3face84e5969448e1816741ac956f0b097355ebdcc7452c2c7f58ce", Pod:"calico-apiserver-579774f8c5-5sn5r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.80.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali77a93201f5c", MAC:"da:7e:3e:a9:9a:a8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:26.013308 containerd[1736]: 2025-11-08 00:26:26.008 [INFO][5132] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="782a4d4ad3face84e5969448e1816741ac956f0b097355ebdcc7452c2c7f58ce" Namespace="calico-apiserver" Pod="calico-apiserver-579774f8c5-5sn5r" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--579774f8c5--5sn5r-eth0" Nov 8 00:26:26.049074 containerd[1736]: time="2025-11-08T00:26:26.048843750Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:26:26.049338 containerd[1736]: time="2025-11-08T00:26:26.048918950Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:26:26.049338 containerd[1736]: time="2025-11-08T00:26:26.048952650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:26.049338 containerd[1736]: time="2025-11-08T00:26:26.049033851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:26.079849 systemd[1]: Started cri-containerd-782a4d4ad3face84e5969448e1816741ac956f0b097355ebdcc7452c2c7f58ce.scope - libcontainer container 782a4d4ad3face84e5969448e1816741ac956f0b097355ebdcc7452c2c7f58ce. 
Nov 8 00:26:26.125959 containerd[1736]: time="2025-11-08T00:26:26.125910903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-579774f8c5-5sn5r,Uid:4bf1c931-a10c-42c4-bece-79d61b489c62,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"782a4d4ad3face84e5969448e1816741ac956f0b097355ebdcc7452c2c7f58ce\""
Nov 8 00:26:26.129312 containerd[1736]: time="2025-11-08T00:26:26.129278322Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 8 00:26:26.243878 systemd-networkd[1354]: calic4131ce8790: Gained IPv6LL
Nov 8 00:26:26.396290 containerd[1736]: time="2025-11-08T00:26:26.396242692Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:26:26.399474 containerd[1736]: time="2025-11-08T00:26:26.399423010Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 8 00:26:26.399662 containerd[1736]: time="2025-11-08T00:26:26.399523311Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 8 00:26:26.399842 kubelet[3265]: E1108 00:26:26.399798 3265 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:26:26.399934 kubelet[3265]: E1108 00:26:26.399856 3265 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:26:26.400127 kubelet[3265]: E1108 00:26:26.400043 3265 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9kgh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-579774f8c5-5sn5r_calico-apiserver(4bf1c931-a10c-42c4-bece-79d61b489c62): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:26:26.401626 kubelet[3265]: E1108 00:26:26.401514 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-579774f8c5-5sn5r" podUID="4bf1c931-a10c-42c4-bece-79d61b489c62"
Nov 8 00:26:26.718330 containerd[1736]: time="2025-11-08T00:26:26.717828782Z" level=info msg="StopPodSandbox for \"3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f\""
Nov 8 00:26:26.801113 containerd[1736]: 2025-11-08 00:26:26.765 [INFO][5211] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f"
Nov 8 00:26:26.801113 containerd[1736]: 2025-11-08 00:26:26.765 [INFO][5211] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f" iface="eth0" netns="/var/run/netns/cni-22eb79c7-301b-260b-c6ea-119fa2da9bf5"
Nov 8 00:26:26.801113 containerd[1736]: 2025-11-08 00:26:26.765 [INFO][5211] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f" iface="eth0" netns="/var/run/netns/cni-22eb79c7-301b-260b-c6ea-119fa2da9bf5"
Nov 8 00:26:26.801113 containerd[1736]: 2025-11-08 00:26:26.766 [INFO][5211] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f" iface="eth0" netns="/var/run/netns/cni-22eb79c7-301b-260b-c6ea-119fa2da9bf5"
Nov 8 00:26:26.801113 containerd[1736]: 2025-11-08 00:26:26.766 [INFO][5211] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f"
Nov 8 00:26:26.801113 containerd[1736]: 2025-11-08 00:26:26.766 [INFO][5211] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f"
Nov 8 00:26:26.801113 containerd[1736]: 2025-11-08 00:26:26.789 [INFO][5218] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f" HandleID="k8s-pod-network.3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f" Workload="ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--847dc76596--gs868-eth0"
Nov 8 00:26:26.801113 containerd[1736]: 2025-11-08 00:26:26.790 [INFO][5218] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 8 00:26:26.801113 containerd[1736]: 2025-11-08 00:26:26.790 [INFO][5218] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 8 00:26:26.801113 containerd[1736]: 2025-11-08 00:26:26.797 [WARNING][5218] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f" HandleID="k8s-pod-network.3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f" Workload="ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--847dc76596--gs868-eth0"
Nov 8 00:26:26.801113 containerd[1736]: 2025-11-08 00:26:26.797 [INFO][5218] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f" HandleID="k8s-pod-network.3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f" Workload="ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--847dc76596--gs868-eth0"
Nov 8 00:26:26.801113 containerd[1736]: 2025-11-08 00:26:26.798 [INFO][5218] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 8 00:26:26.801113 containerd[1736]: 2025-11-08 00:26:26.799 [INFO][5211] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f"
Nov 8 00:26:26.803834 containerd[1736]: time="2025-11-08T00:26:26.803787287Z" level=info msg="TearDown network for sandbox \"3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f\" successfully"
Nov 8 00:26:26.803834 containerd[1736]: time="2025-11-08T00:26:26.803825587Z" level=info msg="StopPodSandbox for \"3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f\" returns successfully"
Nov 8 00:26:26.804555 containerd[1736]: time="2025-11-08T00:26:26.804525291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-847dc76596-gs868,Uid:f9dd6f03-0839-47a7-a4a1-75b8d5be8ef2,Namespace:calico-apiserver,Attempt:1,}"
Nov 8 00:26:26.805562 systemd[1]: run-netns-cni\x2d22eb79c7\x2d301b\x2d260b\x2dc6ea\x2d119fa2da9bf5.mount: Deactivated successfully.
Nov 8 00:26:26.957863 systemd-networkd[1354]: cali3392f7e9cae: Link UP
Nov 8 00:26:26.958382 systemd-networkd[1354]: cali3392f7e9cae: Gained carrier
Nov 8 00:26:26.976316 kubelet[3265]: E1108 00:26:26.976198 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-579774f8c5-5sn5r" podUID="4bf1c931-a10c-42c4-bece-79d61b489c62"
Nov 8 00:26:26.977640 kubelet[3265]: E1108 00:26:26.977583 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wpv6d" podUID="88161698-8450-46cd-aabf-3650fadd565e"
Nov 8 00:26:26.984969 containerd[1736]: 2025-11-08 00:26:26.875 [INFO][5225] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--847dc76596--gs868-eth0 calico-apiserver-847dc76596- calico-apiserver f9dd6f03-0839-47a7-a4a1-75b8d5be8ef2 1022 0 2025-11-08 00:25:52 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:847dc76596 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-036966ce4d calico-apiserver-847dc76596-gs868 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3392f7e9cae [] [] <nil>}} ContainerID="84b854b8c668f7d4ba396544b9b1bbd1fa020a4afbf5052040e010e810cc16b2" Namespace="calico-apiserver" Pod="calico-apiserver-847dc76596-gs868" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--847dc76596--gs868-"
Nov 8 00:26:26.984969 containerd[1736]: 2025-11-08 00:26:26.876 [INFO][5225] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="84b854b8c668f7d4ba396544b9b1bbd1fa020a4afbf5052040e010e810cc16b2" Namespace="calico-apiserver" Pod="calico-apiserver-847dc76596-gs868" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--847dc76596--gs868-eth0"
Nov 8 00:26:26.984969 containerd[1736]: 2025-11-08 00:26:26.903 [INFO][5237] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="84b854b8c668f7d4ba396544b9b1bbd1fa020a4afbf5052040e010e810cc16b2" HandleID="k8s-pod-network.84b854b8c668f7d4ba396544b9b1bbd1fa020a4afbf5052040e010e810cc16b2" Workload="ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--847dc76596--gs868-eth0"
Nov 8 00:26:26.984969 containerd[1736]: 2025-11-08 00:26:26.903 [INFO][5237] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="84b854b8c668f7d4ba396544b9b1bbd1fa020a4afbf5052040e010e810cc16b2" HandleID="k8s-pod-network.84b854b8c668f7d4ba396544b9b1bbd1fa020a4afbf5052040e010e810cc16b2" Workload="ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--847dc76596--gs868-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f590), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-036966ce4d", "pod":"calico-apiserver-847dc76596-gs868", "timestamp":"2025-11-08 00:26:26.903308172 +0000 UTC"}, Hostname:"ci-4081.3.6-n-036966ce4d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Nov 8 00:26:26.984969 containerd[1736]: 2025-11-08 00:26:26.903 [INFO][5237] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 8 00:26:26.984969 containerd[1736]: 2025-11-08 00:26:26.903 [INFO][5237] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 8 00:26:26.984969 containerd[1736]: 2025-11-08 00:26:26.903 [INFO][5237] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-036966ce4d'
Nov 8 00:26:26.984969 containerd[1736]: 2025-11-08 00:26:26.919 [INFO][5237] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.84b854b8c668f7d4ba396544b9b1bbd1fa020a4afbf5052040e010e810cc16b2" host="ci-4081.3.6-n-036966ce4d"
Nov 8 00:26:26.984969 containerd[1736]: 2025-11-08 00:26:26.923 [INFO][5237] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-036966ce4d"
Nov 8 00:26:26.984969 containerd[1736]: 2025-11-08 00:26:26.928 [INFO][5237] ipam/ipam.go 511: Trying affinity for 192.168.80.0/26 host="ci-4081.3.6-n-036966ce4d"
Nov 8 00:26:26.984969 containerd[1736]: 2025-11-08 00:26:26.930 [INFO][5237] ipam/ipam.go 158: Attempting to load block cidr=192.168.80.0/26 host="ci-4081.3.6-n-036966ce4d"
Nov 8 00:26:26.984969 containerd[1736]: 2025-11-08 00:26:26.934 [INFO][5237] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.80.0/26 host="ci-4081.3.6-n-036966ce4d"
Nov 8 00:26:26.984969 containerd[1736]: 2025-11-08 00:26:26.934 [INFO][5237] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.80.0/26 handle="k8s-pod-network.84b854b8c668f7d4ba396544b9b1bbd1fa020a4afbf5052040e010e810cc16b2" host="ci-4081.3.6-n-036966ce4d"
Nov 8 00:26:26.984969 containerd[1736]: 2025-11-08 00:26:26.937 [INFO][5237] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.84b854b8c668f7d4ba396544b9b1bbd1fa020a4afbf5052040e010e810cc16b2
Nov 8 00:26:26.984969 containerd[1736]: 2025-11-08 00:26:26.942 [INFO][5237] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.80.0/26 handle="k8s-pod-network.84b854b8c668f7d4ba396544b9b1bbd1fa020a4afbf5052040e010e810cc16b2" host="ci-4081.3.6-n-036966ce4d"
Nov 8 00:26:26.984969 containerd[1736]: 2025-11-08 00:26:26.951 [INFO][5237] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.80.6/26] block=192.168.80.0/26 handle="k8s-pod-network.84b854b8c668f7d4ba396544b9b1bbd1fa020a4afbf5052040e010e810cc16b2" host="ci-4081.3.6-n-036966ce4d"
Nov 8 00:26:26.984969 containerd[1736]: 2025-11-08 00:26:26.951 [INFO][5237] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.80.6/26] handle="k8s-pod-network.84b854b8c668f7d4ba396544b9b1bbd1fa020a4afbf5052040e010e810cc16b2" host="ci-4081.3.6-n-036966ce4d"
Nov 8 00:26:26.984969 containerd[1736]: 2025-11-08 00:26:26.951 [INFO][5237] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 8 00:26:26.984969 containerd[1736]: 2025-11-08 00:26:26.951 [INFO][5237] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.80.6/26] IPv6=[] ContainerID="84b854b8c668f7d4ba396544b9b1bbd1fa020a4afbf5052040e010e810cc16b2" HandleID="k8s-pod-network.84b854b8c668f7d4ba396544b9b1bbd1fa020a4afbf5052040e010e810cc16b2" Workload="ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--847dc76596--gs868-eth0"
Nov 8 00:26:26.985876 containerd[1736]: 2025-11-08 00:26:26.953 [INFO][5225] cni-plugin/k8s.go 418: Populated endpoint ContainerID="84b854b8c668f7d4ba396544b9b1bbd1fa020a4afbf5052040e010e810cc16b2" Namespace="calico-apiserver" Pod="calico-apiserver-847dc76596-gs868" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--847dc76596--gs868-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--847dc76596--gs868-eth0", GenerateName:"calico-apiserver-847dc76596-", Namespace:"calico-apiserver", SelfLink:"", UID:"f9dd6f03-0839-47a7-a4a1-75b8d5be8ef2", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 25, 52, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"847dc76596", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-036966ce4d", ContainerID:"", Pod:"calico-apiserver-847dc76596-gs868", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.80.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3392f7e9cae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 8 00:26:26.985876 containerd[1736]: 2025-11-08 00:26:26.953 [INFO][5225] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.80.6/32] ContainerID="84b854b8c668f7d4ba396544b9b1bbd1fa020a4afbf5052040e010e810cc16b2" Namespace="calico-apiserver" Pod="calico-apiserver-847dc76596-gs868" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--847dc76596--gs868-eth0"
Nov 8 00:26:26.985876 containerd[1736]: 2025-11-08 00:26:26.953 [INFO][5225] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3392f7e9cae ContainerID="84b854b8c668f7d4ba396544b9b1bbd1fa020a4afbf5052040e010e810cc16b2" Namespace="calico-apiserver" Pod="calico-apiserver-847dc76596-gs868" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--847dc76596--gs868-eth0"
Nov 8 00:26:26.985876 containerd[1736]: 2025-11-08 00:26:26.959 [INFO][5225] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="84b854b8c668f7d4ba396544b9b1bbd1fa020a4afbf5052040e010e810cc16b2" Namespace="calico-apiserver" Pod="calico-apiserver-847dc76596-gs868" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--847dc76596--gs868-eth0"
Nov 8 00:26:26.985876 containerd[1736]: 2025-11-08 00:26:26.959 [INFO][5225] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="84b854b8c668f7d4ba396544b9b1bbd1fa020a4afbf5052040e010e810cc16b2" Namespace="calico-apiserver" Pod="calico-apiserver-847dc76596-gs868" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--847dc76596--gs868-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--847dc76596--gs868-eth0", GenerateName:"calico-apiserver-847dc76596-", Namespace:"calico-apiserver", SelfLink:"", UID:"f9dd6f03-0839-47a7-a4a1-75b8d5be8ef2", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 25, 52, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"847dc76596", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-036966ce4d", ContainerID:"84b854b8c668f7d4ba396544b9b1bbd1fa020a4afbf5052040e010e810cc16b2", Pod:"calico-apiserver-847dc76596-gs868", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.80.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3392f7e9cae", MAC:"7a:13:92:25:1d:d8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 8 00:26:26.985876 containerd[1736]: 2025-11-08 00:26:26.982 [INFO][5225] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="84b854b8c668f7d4ba396544b9b1bbd1fa020a4afbf5052040e010e810cc16b2" Namespace="calico-apiserver" Pod="calico-apiserver-847dc76596-gs868" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--847dc76596--gs868-eth0"
Nov 8 00:26:27.032753 containerd[1736]: time="2025-11-08T00:26:27.030890822Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 8 00:26:27.032753 containerd[1736]: time="2025-11-08T00:26:27.031213624Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 8 00:26:27.032753 containerd[1736]: time="2025-11-08T00:26:27.031236224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:26:27.032753 containerd[1736]: time="2025-11-08T00:26:27.031457625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:26:27.065930 systemd[1]: Started cri-containerd-84b854b8c668f7d4ba396544b9b1bbd1fa020a4afbf5052040e010e810cc16b2.scope - libcontainer container 84b854b8c668f7d4ba396544b9b1bbd1fa020a4afbf5052040e010e810cc16b2.
Nov 8 00:26:27.117548 containerd[1736]: time="2025-11-08T00:26:27.117500331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-847dc76596-gs868,Uid:f9dd6f03-0839-47a7-a4a1-75b8d5be8ef2,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"84b854b8c668f7d4ba396544b9b1bbd1fa020a4afbf5052040e010e810cc16b2\""
Nov 8 00:26:27.119075 containerd[1736]: time="2025-11-08T00:26:27.119046740Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 8 00:26:27.358007 containerd[1736]: time="2025-11-08T00:26:27.357844544Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:26:27.360814 containerd[1736]: time="2025-11-08T00:26:27.360751261Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 8 00:26:27.361044 containerd[1736]: time="2025-11-08T00:26:27.360856661Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 8 00:26:27.361104 kubelet[3265]: E1108 00:26:27.361024 3265 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:26:27.361171 kubelet[3265]: E1108 00:26:27.361096 3265 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:26:27.361330 kubelet[3265]: E1108 00:26:27.361282 3265 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-72wkc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-847dc76596-gs868_calico-apiserver(f9dd6f03-0839-47a7-a4a1-75b8d5be8ef2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:26:27.362805 kubelet[3265]: E1108 00:26:27.362738 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-847dc76596-gs868" podUID="f9dd6f03-0839-47a7-a4a1-75b8d5be8ef2"
Nov 8 00:26:27.719668 containerd[1736]: time="2025-11-08T00:26:27.719293668Z" level=info msg="StopPodSandbox for \"b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba\""
Nov 8 00:26:27.725163 containerd[1736]: time="2025-11-08T00:26:27.725126103Z" level=info msg="StopPodSandbox for \"916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b\""
Nov 8 00:26:27.845220 systemd-networkd[1354]: cali77a93201f5c: Gained IPv6LL
Nov 8 00:26:27.865738 containerd[1736]: 2025-11-08 00:26:27.803 [INFO][5310] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba"
Nov 8 00:26:27.865738 containerd[1736]: 2025-11-08 00:26:27.805 [INFO][5310] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba" iface="eth0" netns="/var/run/netns/cni-57c5558f-5ff0-3bc5-5be6-df852e423b55"
Nov 8 00:26:27.865738 containerd[1736]: 2025-11-08 00:26:27.805 [INFO][5310] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba" iface="eth0" netns="/var/run/netns/cni-57c5558f-5ff0-3bc5-5be6-df852e423b55"
Nov 8 00:26:27.865738 containerd[1736]: 2025-11-08 00:26:27.805 [INFO][5310] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba" iface="eth0" netns="/var/run/netns/cni-57c5558f-5ff0-3bc5-5be6-df852e423b55"
Nov 8 00:26:27.865738 containerd[1736]: 2025-11-08 00:26:27.805 [INFO][5310] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba"
Nov 8 00:26:27.865738 containerd[1736]: 2025-11-08 00:26:27.806 [INFO][5310] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba"
Nov 8 00:26:27.865738 containerd[1736]: 2025-11-08 00:26:27.846 [INFO][5331] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba" HandleID="k8s-pod-network.b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba" Workload="ci--4081.3.6--n--036966ce4d-k8s-coredns--674b8bbfcf--bm6fd-eth0"
Nov 8 00:26:27.865738 containerd[1736]: 2025-11-08 00:26:27.847 [INFO][5331] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 8 00:26:27.865738 containerd[1736]: 2025-11-08 00:26:27.847 [INFO][5331] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 8 00:26:27.865738 containerd[1736]: 2025-11-08 00:26:27.858 [WARNING][5331] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba" HandleID="k8s-pod-network.b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba" Workload="ci--4081.3.6--n--036966ce4d-k8s-coredns--674b8bbfcf--bm6fd-eth0"
Nov 8 00:26:27.865738 containerd[1736]: 2025-11-08 00:26:27.858 [INFO][5331] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba" HandleID="k8s-pod-network.b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba" Workload="ci--4081.3.6--n--036966ce4d-k8s-coredns--674b8bbfcf--bm6fd-eth0"
Nov 8 00:26:27.865738 containerd[1736]: 2025-11-08 00:26:27.860 [INFO][5331] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 8 00:26:27.865738 containerd[1736]: 2025-11-08 00:26:27.863 [INFO][5310] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba"
Nov 8 00:26:27.869921 containerd[1736]: time="2025-11-08T00:26:27.869852053Z" level=info msg="TearDown network for sandbox \"b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba\" successfully"
Nov 8 00:26:27.869921 containerd[1736]: time="2025-11-08T00:26:27.869914054Z" level=info msg="StopPodSandbox for \"b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba\" returns successfully"
Nov 8 00:26:27.870679 containerd[1736]: time="2025-11-08T00:26:27.870648458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bm6fd,Uid:96be1a30-6407-4196-820d-f11b48aff85e,Namespace:kube-system,Attempt:1,}"
Nov 8 00:26:27.872518 systemd[1]: run-netns-cni\x2d57c5558f\x2d5ff0\x2d3bc5\x2d5be6\x2ddf852e423b55.mount: Deactivated successfully.
Nov 8 00:26:27.881548 containerd[1736]: 2025-11-08 00:26:27.807 [INFO][5321] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b"
Nov 8 00:26:27.881548 containerd[1736]: 2025-11-08 00:26:27.808 [INFO][5321] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b" iface="eth0" netns="/var/run/netns/cni-28e8546a-fed8-2b6a-fb8b-69480f2d34bd"
Nov 8 00:26:27.881548 containerd[1736]: 2025-11-08 00:26:27.808 [INFO][5321] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b" iface="eth0" netns="/var/run/netns/cni-28e8546a-fed8-2b6a-fb8b-69480f2d34bd"
Nov 8 00:26:27.881548 containerd[1736]: 2025-11-08 00:26:27.808 [INFO][5321] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b" iface="eth0" netns="/var/run/netns/cni-28e8546a-fed8-2b6a-fb8b-69480f2d34bd"
Nov 8 00:26:27.881548 containerd[1736]: 2025-11-08 00:26:27.808 [INFO][5321] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b"
Nov 8 00:26:27.881548 containerd[1736]: 2025-11-08 00:26:27.808 [INFO][5321] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b"
Nov 8 00:26:27.881548 containerd[1736]: 2025-11-08 00:26:27.858 [INFO][5333] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b" HandleID="k8s-pod-network.916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b" Workload="ci--4081.3.6--n--036966ce4d-k8s-goldmane--666569f655--9cc5c-eth0"
Nov 8 00:26:27.881548 containerd[1736]: 2025-11-08 00:26:27.859 [INFO][5333] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 8 00:26:27.881548 containerd[1736]: 2025-11-08 00:26:27.860 [INFO][5333] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 8 00:26:27.881548 containerd[1736]: 2025-11-08 00:26:27.876 [WARNING][5333] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b" HandleID="k8s-pod-network.916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b" Workload="ci--4081.3.6--n--036966ce4d-k8s-goldmane--666569f655--9cc5c-eth0"
Nov 8 00:26:27.881548 containerd[1736]: 2025-11-08 00:26:27.877 [INFO][5333] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b" HandleID="k8s-pod-network.916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b" Workload="ci--4081.3.6--n--036966ce4d-k8s-goldmane--666569f655--9cc5c-eth0"
Nov 8 00:26:27.881548 containerd[1736]: 2025-11-08 00:26:27.878 [INFO][5333] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 8 00:26:27.881548 containerd[1736]: 2025-11-08 00:26:27.879 [INFO][5321] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b"
Nov 8 00:26:27.883956 containerd[1736]: time="2025-11-08T00:26:27.883818535Z" level=info msg="TearDown network for sandbox \"916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b\" successfully"
Nov 8 00:26:27.883956 containerd[1736]: time="2025-11-08T00:26:27.883855236Z" level=info msg="StopPodSandbox for \"916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b\" returns successfully"
Nov 8 00:26:27.886369 containerd[1736]: time="2025-11-08T00:26:27.886112549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-9cc5c,Uid:5435b4b0-7a30-4d83-a845-ff6ed8ff1797,Namespace:calico-system,Attempt:1,}"
Nov 8 00:26:27.886284 systemd[1]: run-netns-cni\x2d28e8546a\x2dfed8\x2d2b6a\x2dfb8b\x2d69480f2d34bd.mount: Deactivated successfully.
Nov 8 00:26:27.989782 kubelet[3265]: E1108 00:26:27.987777 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-579774f8c5-5sn5r" podUID="4bf1c931-a10c-42c4-bece-79d61b489c62"
Nov 8 00:26:27.993034 kubelet[3265]: E1108 00:26:27.990137 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-847dc76596-gs868" podUID="f9dd6f03-0839-47a7-a4a1-75b8d5be8ef2"
Nov 8 00:26:28.029039 kubelet[3265]: I1108 00:26:28.029001 3265 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 8 00:26:28.228956 systemd-networkd[1354]: calice000dd20c7: Link UP
Nov 8 00:26:28.231755 systemd-networkd[1354]: calice000dd20c7: Gained carrier
Nov 8 00:26:28.264216 containerd[1736]: 2025-11-08 00:26:27.987 [INFO][5346] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--036966ce4d-k8s-coredns--674b8bbfcf--bm6fd-eth0 coredns-674b8bbfcf- kube-system 96be1a30-6407-4196-820d-f11b48aff85e 1043 0 2025-11-08 00:25:42 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-036966ce4d coredns-674b8bbfcf-bm6fd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calice000dd20c7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] <nil>}} ContainerID="bd8a6e957a171d2b91f647859b0fa9a09dca7ebb41536ad2e1e5ea054be4a492" Namespace="kube-system" Pod="coredns-674b8bbfcf-bm6fd" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-coredns--674b8bbfcf--bm6fd-"
Nov 8 00:26:28.264216 containerd[1736]: 2025-11-08 00:26:27.987 [INFO][5346] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bd8a6e957a171d2b91f647859b0fa9a09dca7ebb41536ad2e1e5ea054be4a492" Namespace="kube-system" Pod="coredns-674b8bbfcf-bm6fd" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-coredns--674b8bbfcf--bm6fd-eth0"
Nov 8 00:26:28.264216 containerd[1736]: 2025-11-08 00:26:28.112 [INFO][5372] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bd8a6e957a171d2b91f647859b0fa9a09dca7ebb41536ad2e1e5ea054be4a492" HandleID="k8s-pod-network.bd8a6e957a171d2b91f647859b0fa9a09dca7ebb41536ad2e1e5ea054be4a492" Workload="ci--4081.3.6--n--036966ce4d-k8s-coredns--674b8bbfcf--bm6fd-eth0"
Nov 8 00:26:28.264216 containerd[1736]: 2025-11-08 00:26:28.112 [INFO][5372] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bd8a6e957a171d2b91f647859b0fa9a09dca7ebb41536ad2e1e5ea054be4a492" HandleID="k8s-pod-network.bd8a6e957a171d2b91f647859b0fa9a09dca7ebb41536ad2e1e5ea054be4a492" Workload="ci--4081.3.6--n--036966ce4d-k8s-coredns--674b8bbfcf--bm6fd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000358fa0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-036966ce4d", "pod":"coredns-674b8bbfcf-bm6fd", "timestamp":"2025-11-08 00:26:28.111996577 +0000 UTC"}, Hostname:"ci-4081.3.6-n-036966ce4d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Nov 8 00:26:28.264216 containerd[1736]: 2025-11-08 00:26:28.112 [INFO][5372] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 8 00:26:28.264216 containerd[1736]: 2025-11-08 00:26:28.112 [INFO][5372] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 8 00:26:28.264216 containerd[1736]: 2025-11-08 00:26:28.112 [INFO][5372] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-036966ce4d'
Nov 8 00:26:28.264216 containerd[1736]: 2025-11-08 00:26:28.133 [INFO][5372] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bd8a6e957a171d2b91f647859b0fa9a09dca7ebb41536ad2e1e5ea054be4a492" host="ci-4081.3.6-n-036966ce4d"
Nov 8 00:26:28.264216 containerd[1736]: 2025-11-08 00:26:28.164 [INFO][5372] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-036966ce4d"
Nov 8 00:26:28.264216 containerd[1736]: 2025-11-08 00:26:28.172 [INFO][5372] ipam/ipam.go 511: Trying affinity for 192.168.80.0/26 host="ci-4081.3.6-n-036966ce4d"
Nov 8 00:26:28.264216 containerd[1736]: 2025-11-08 00:26:28.177 [INFO][5372] ipam/ipam.go 158: Attempting to load block cidr=192.168.80.0/26 host="ci-4081.3.6-n-036966ce4d"
Nov 8 00:26:28.264216 containerd[1736]: 2025-11-08 00:26:28.181 [INFO][5372] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.80.0/26 host="ci-4081.3.6-n-036966ce4d"
Nov 8 00:26:28.264216 containerd[1736]: 2025-11-08 00:26:28.181 [INFO][5372] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.80.0/26 handle="k8s-pod-network.bd8a6e957a171d2b91f647859b0fa9a09dca7ebb41536ad2e1e5ea054be4a492" host="ci-4081.3.6-n-036966ce4d"
Nov 8 00:26:28.264216 containerd[1736]: 2025-11-08 00:26:28.185 [INFO][5372] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bd8a6e957a171d2b91f647859b0fa9a09dca7ebb41536ad2e1e5ea054be4a492
Nov 8 00:26:28.264216 containerd[1736]: 2025-11-08 00:26:28.203 [INFO][5372] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.80.0/26 handle="k8s-pod-network.bd8a6e957a171d2b91f647859b0fa9a09dca7ebb41536ad2e1e5ea054be4a492" host="ci-4081.3.6-n-036966ce4d"
Nov 8 00:26:28.264216 containerd[1736]: 2025-11-08 00:26:28.219 [INFO][5372] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.80.7/26] block=192.168.80.0/26 handle="k8s-pod-network.bd8a6e957a171d2b91f647859b0fa9a09dca7ebb41536ad2e1e5ea054be4a492" host="ci-4081.3.6-n-036966ce4d"
Nov 8 00:26:28.264216 containerd[1736]: 2025-11-08 00:26:28.219 [INFO][5372] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.80.7/26] handle="k8s-pod-network.bd8a6e957a171d2b91f647859b0fa9a09dca7ebb41536ad2e1e5ea054be4a492" host="ci-4081.3.6-n-036966ce4d"
Nov 8 00:26:28.264216 containerd[1736]: 2025-11-08 00:26:28.219 [INFO][5372] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 8 00:26:28.264216 containerd[1736]: 2025-11-08 00:26:28.219 [INFO][5372] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.80.7/26] IPv6=[] ContainerID="bd8a6e957a171d2b91f647859b0fa9a09dca7ebb41536ad2e1e5ea054be4a492" HandleID="k8s-pod-network.bd8a6e957a171d2b91f647859b0fa9a09dca7ebb41536ad2e1e5ea054be4a492" Workload="ci--4081.3.6--n--036966ce4d-k8s-coredns--674b8bbfcf--bm6fd-eth0"
Nov 8 00:26:28.265366 containerd[1736]: 2025-11-08 00:26:28.223 [INFO][5346] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bd8a6e957a171d2b91f647859b0fa9a09dca7ebb41536ad2e1e5ea054be4a492" Namespace="kube-system" Pod="coredns-674b8bbfcf-bm6fd" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-coredns--674b8bbfcf--bm6fd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--036966ce4d-k8s-coredns--674b8bbfcf--bm6fd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"96be1a30-6407-4196-820d-f11b48aff85e", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 25, 42, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-036966ce4d", ContainerID:"", Pod:"coredns-674b8bbfcf-bm6fd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.80.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calice000dd20c7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 8 00:26:28.265366 containerd[1736]: 2025-11-08 00:26:28.224 [INFO][5346] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.80.7/32] ContainerID="bd8a6e957a171d2b91f647859b0fa9a09dca7ebb41536ad2e1e5ea054be4a492" Namespace="kube-system" Pod="coredns-674b8bbfcf-bm6fd" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-coredns--674b8bbfcf--bm6fd-eth0"
Nov 8 00:26:28.265366 containerd[1736]: 2025-11-08 00:26:28.224 [INFO][5346] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calice000dd20c7 ContainerID="bd8a6e957a171d2b91f647859b0fa9a09dca7ebb41536ad2e1e5ea054be4a492" Namespace="kube-system" Pod="coredns-674b8bbfcf-bm6fd" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-coredns--674b8bbfcf--bm6fd-eth0"
Nov 8 00:26:28.265366 containerd[1736]: 2025-11-08 00:26:28.234 [INFO][5346] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bd8a6e957a171d2b91f647859b0fa9a09dca7ebb41536ad2e1e5ea054be4a492" Namespace="kube-system" Pod="coredns-674b8bbfcf-bm6fd" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-coredns--674b8bbfcf--bm6fd-eth0"
Nov 8 00:26:28.265366 containerd[1736]: 2025-11-08 00:26:28.235 [INFO][5346] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bd8a6e957a171d2b91f647859b0fa9a09dca7ebb41536ad2e1e5ea054be4a492" Namespace="kube-system" Pod="coredns-674b8bbfcf-bm6fd" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-coredns--674b8bbfcf--bm6fd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--036966ce4d-k8s-coredns--674b8bbfcf--bm6fd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"96be1a30-6407-4196-820d-f11b48aff85e", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 25, 42, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-036966ce4d", ContainerID:"bd8a6e957a171d2b91f647859b0fa9a09dca7ebb41536ad2e1e5ea054be4a492", Pod:"coredns-674b8bbfcf-bm6fd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.80.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calice000dd20c7", MAC:"a6:8c:fc:b7:b7:7f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 8 00:26:28.265366 containerd[1736]: 2025-11-08 00:26:28.261 [INFO][5346] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bd8a6e957a171d2b91f647859b0fa9a09dca7ebb41536ad2e1e5ea054be4a492" Namespace="kube-system" Pod="coredns-674b8bbfcf-bm6fd" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-coredns--674b8bbfcf--bm6fd-eth0"
Nov 8 00:26:28.314102 containerd[1736]: time="2025-11-08T00:26:28.313734362Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 8 00:26:28.314102 containerd[1736]: time="2025-11-08T00:26:28.313807063Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 8 00:26:28.314102 containerd[1736]: time="2025-11-08T00:26:28.313824863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:26:28.314102 containerd[1736]: time="2025-11-08T00:26:28.313968464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:26:28.342059 systemd-networkd[1354]: cali58dde8405e5: Link UP
Nov 8 00:26:28.342331 systemd-networkd[1354]: cali58dde8405e5: Gained carrier
Nov 8 00:26:28.347159 systemd[1]: Started cri-containerd-bd8a6e957a171d2b91f647859b0fa9a09dca7ebb41536ad2e1e5ea054be4a492.scope - libcontainer container bd8a6e957a171d2b91f647859b0fa9a09dca7ebb41536ad2e1e5ea054be4a492.
Nov 8 00:26:28.376792 containerd[1736]: 2025-11-08 00:26:28.032 [INFO][5356] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--036966ce4d-k8s-goldmane--666569f655--9cc5c-eth0 goldmane-666569f655- calico-system 5435b4b0-7a30-4d83-a845-ff6ed8ff1797 1044 0 2025-11-08 00:25:56 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-n-036966ce4d goldmane-666569f655-9cc5c eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali58dde8405e5 [] [] <nil>}} ContainerID="195822c297fdbd110a0050a2f800e1ead322d48fc6df3d05e833b837df4893b7" Namespace="calico-system" Pod="goldmane-666569f655-9cc5c" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-goldmane--666569f655--9cc5c-"
Nov 8 00:26:28.376792 containerd[1736]: 2025-11-08 00:26:28.035 [INFO][5356] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="195822c297fdbd110a0050a2f800e1ead322d48fc6df3d05e833b837df4893b7" Namespace="calico-system" Pod="goldmane-666569f655-9cc5c" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-goldmane--666569f655--9cc5c-eth0"
Nov 8 00:26:28.376792 containerd[1736]: 2025-11-08 00:26:28.177 [INFO][5387] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="195822c297fdbd110a0050a2f800e1ead322d48fc6df3d05e833b837df4893b7" HandleID="k8s-pod-network.195822c297fdbd110a0050a2f800e1ead322d48fc6df3d05e833b837df4893b7" Workload="ci--4081.3.6--n--036966ce4d-k8s-goldmane--666569f655--9cc5c-eth0"
Nov 8 00:26:28.376792 containerd[1736]: 2025-11-08 00:26:28.179 [INFO][5387] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="195822c297fdbd110a0050a2f800e1ead322d48fc6df3d05e833b837df4893b7" HandleID="k8s-pod-network.195822c297fdbd110a0050a2f800e1ead322d48fc6df3d05e833b837df4893b7" Workload="ci--4081.3.6--n--036966ce4d-k8s-goldmane--666569f655--9cc5c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f9460), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-036966ce4d", "pod":"goldmane-666569f655-9cc5c", "timestamp":"2025-11-08 00:26:28.17728866 +0000 UTC"}, Hostname:"ci-4081.3.6-n-036966ce4d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Nov 8 00:26:28.376792 containerd[1736]: 2025-11-08 00:26:28.179 [INFO][5387] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Nov 8 00:26:28.376792 containerd[1736]: 2025-11-08 00:26:28.219 [INFO][5387] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Nov 8 00:26:28.376792 containerd[1736]: 2025-11-08 00:26:28.219 [INFO][5387] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-036966ce4d'
Nov 8 00:26:28.376792 containerd[1736]: 2025-11-08 00:26:28.255 [INFO][5387] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.195822c297fdbd110a0050a2f800e1ead322d48fc6df3d05e833b837df4893b7" host="ci-4081.3.6-n-036966ce4d"
Nov 8 00:26:28.376792 containerd[1736]: 2025-11-08 00:26:28.275 [INFO][5387] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-036966ce4d"
Nov 8 00:26:28.376792 containerd[1736]: 2025-11-08 00:26:28.284 [INFO][5387] ipam/ipam.go 511: Trying affinity for 192.168.80.0/26 host="ci-4081.3.6-n-036966ce4d"
Nov 8 00:26:28.376792 containerd[1736]: 2025-11-08 00:26:28.287 [INFO][5387] ipam/ipam.go 158: Attempting to load block cidr=192.168.80.0/26 host="ci-4081.3.6-n-036966ce4d"
Nov 8 00:26:28.376792 containerd[1736]: 2025-11-08 00:26:28.291 [INFO][5387] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.80.0/26 host="ci-4081.3.6-n-036966ce4d"
Nov 8 00:26:28.376792 containerd[1736]: 2025-11-08 00:26:28.291 [INFO][5387] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.80.0/26 handle="k8s-pod-network.195822c297fdbd110a0050a2f800e1ead322d48fc6df3d05e833b837df4893b7" host="ci-4081.3.6-n-036966ce4d"
Nov 8 00:26:28.376792 containerd[1736]: 2025-11-08 00:26:28.295 [INFO][5387] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.195822c297fdbd110a0050a2f800e1ead322d48fc6df3d05e833b837df4893b7
Nov 8 00:26:28.376792 containerd[1736]: 2025-11-08 00:26:28.303 [INFO][5387] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.80.0/26 handle="k8s-pod-network.195822c297fdbd110a0050a2f800e1ead322d48fc6df3d05e833b837df4893b7" host="ci-4081.3.6-n-036966ce4d"
Nov 8 00:26:28.376792 containerd[1736]: 2025-11-08 00:26:28.324 [INFO][5387] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.80.8/26] block=192.168.80.0/26 handle="k8s-pod-network.195822c297fdbd110a0050a2f800e1ead322d48fc6df3d05e833b837df4893b7" host="ci-4081.3.6-n-036966ce4d"
Nov 8 00:26:28.376792 containerd[1736]: 2025-11-08 00:26:28.325 [INFO][5387] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.80.8/26] handle="k8s-pod-network.195822c297fdbd110a0050a2f800e1ead322d48fc6df3d05e833b837df4893b7" host="ci-4081.3.6-n-036966ce4d"
Nov 8 00:26:28.376792 containerd[1736]: 2025-11-08 00:26:28.325 [INFO][5387] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Nov 8 00:26:28.376792 containerd[1736]: 2025-11-08 00:26:28.325 [INFO][5387] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.80.8/26] IPv6=[] ContainerID="195822c297fdbd110a0050a2f800e1ead322d48fc6df3d05e833b837df4893b7" HandleID="k8s-pod-network.195822c297fdbd110a0050a2f800e1ead322d48fc6df3d05e833b837df4893b7" Workload="ci--4081.3.6--n--036966ce4d-k8s-goldmane--666569f655--9cc5c-eth0"
Nov 8 00:26:28.379398 containerd[1736]: 2025-11-08 00:26:28.332 [INFO][5356] cni-plugin/k8s.go 418: Populated endpoint ContainerID="195822c297fdbd110a0050a2f800e1ead322d48fc6df3d05e833b837df4893b7" Namespace="calico-system" Pod="goldmane-666569f655-9cc5c" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-goldmane--666569f655--9cc5c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--036966ce4d-k8s-goldmane--666569f655--9cc5c-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"5435b4b0-7a30-4d83-a845-ff6ed8ff1797", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 25, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-036966ce4d", ContainerID:"", Pod:"goldmane-666569f655-9cc5c", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.80.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali58dde8405e5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 8 00:26:28.379398 containerd[1736]: 2025-11-08 00:26:28.333 [INFO][5356] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.80.8/32] ContainerID="195822c297fdbd110a0050a2f800e1ead322d48fc6df3d05e833b837df4893b7" Namespace="calico-system" Pod="goldmane-666569f655-9cc5c" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-goldmane--666569f655--9cc5c-eth0"
Nov 8 00:26:28.379398 containerd[1736]: 2025-11-08 00:26:28.333 [INFO][5356] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali58dde8405e5 ContainerID="195822c297fdbd110a0050a2f800e1ead322d48fc6df3d05e833b837df4893b7" Namespace="calico-system" Pod="goldmane-666569f655-9cc5c" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-goldmane--666569f655--9cc5c-eth0"
Nov 8 00:26:28.379398 containerd[1736]: 2025-11-08 00:26:28.341 [INFO][5356] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="195822c297fdbd110a0050a2f800e1ead322d48fc6df3d05e833b837df4893b7" Namespace="calico-system" Pod="goldmane-666569f655-9cc5c" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-goldmane--666569f655--9cc5c-eth0"
Nov 8 00:26:28.379398 containerd[1736]: 2025-11-08 00:26:28.341 [INFO][5356] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="195822c297fdbd110a0050a2f800e1ead322d48fc6df3d05e833b837df4893b7" Namespace="calico-system" Pod="goldmane-666569f655-9cc5c" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-goldmane--666569f655--9cc5c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--036966ce4d-k8s-goldmane--666569f655--9cc5c-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"5435b4b0-7a30-4d83-a845-ff6ed8ff1797", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 25, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-036966ce4d", ContainerID:"195822c297fdbd110a0050a2f800e1ead322d48fc6df3d05e833b837df4893b7", Pod:"goldmane-666569f655-9cc5c", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.80.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali58dde8405e5", MAC:"ee:39:4e:8e:c2:2c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Nov 8 00:26:28.379398 containerd[1736]: 2025-11-08 00:26:28.371 [INFO][5356] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="195822c297fdbd110a0050a2f800e1ead322d48fc6df3d05e833b837df4893b7" Namespace="calico-system" Pod="goldmane-666569f655-9cc5c" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-goldmane--666569f655--9cc5c-eth0"
Nov 8 00:26:28.407398 containerd[1736]: time="2025-11-08T00:26:28.407080411Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 8 00:26:28.407398 containerd[1736]: time="2025-11-08T00:26:28.407142111Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 8 00:26:28.407398 containerd[1736]: time="2025-11-08T00:26:28.407180212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:26:28.407398 containerd[1736]: time="2025-11-08T00:26:28.407269612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:26:28.446845 containerd[1736]: time="2025-11-08T00:26:28.446777944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bm6fd,Uid:96be1a30-6407-4196-820d-f11b48aff85e,Namespace:kube-system,Attempt:1,} returns sandbox id \"bd8a6e957a171d2b91f647859b0fa9a09dca7ebb41536ad2e1e5ea054be4a492\""
Nov 8 00:26:28.452252 systemd[1]: Started cri-containerd-195822c297fdbd110a0050a2f800e1ead322d48fc6df3d05e833b837df4893b7.scope - libcontainer container 195822c297fdbd110a0050a2f800e1ead322d48fc6df3d05e833b837df4893b7.
Nov 8 00:26:28.459548 containerd[1736]: time="2025-11-08T00:26:28.459442719Z" level=info msg="CreateContainer within sandbox \"bd8a6e957a171d2b91f647859b0fa9a09dca7ebb41536ad2e1e5ea054be4a492\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:26:28.495860 containerd[1736]: time="2025-11-08T00:26:28.495817632Z" level=info msg="CreateContainer within sandbox \"bd8a6e957a171d2b91f647859b0fa9a09dca7ebb41536ad2e1e5ea054be4a492\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"73cfe2f2ef885edc7cdd3aa18596b3b58cb381650bb213c43d94c74b6b76d054\"" Nov 8 00:26:28.496762 containerd[1736]: time="2025-11-08T00:26:28.496727137Z" level=info msg="StartContainer for \"73cfe2f2ef885edc7cdd3aa18596b3b58cb381650bb213c43d94c74b6b76d054\"" Nov 8 00:26:28.538913 systemd[1]: Started cri-containerd-73cfe2f2ef885edc7cdd3aa18596b3b58cb381650bb213c43d94c74b6b76d054.scope - libcontainer container 73cfe2f2ef885edc7cdd3aa18596b3b58cb381650bb213c43d94c74b6b76d054. Nov 8 00:26:28.607919 containerd[1736]: time="2025-11-08T00:26:28.607859690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-9cc5c,Uid:5435b4b0-7a30-4d83-a845-ff6ed8ff1797,Namespace:calico-system,Attempt:1,} returns sandbox id \"195822c297fdbd110a0050a2f800e1ead322d48fc6df3d05e833b837df4893b7\"" Nov 8 00:26:28.611217 containerd[1736]: time="2025-11-08T00:26:28.611126309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:26:28.643073 containerd[1736]: time="2025-11-08T00:26:28.643023196Z" level=info msg="StartContainer for \"73cfe2f2ef885edc7cdd3aa18596b3b58cb381650bb213c43d94c74b6b76d054\" returns successfully" Nov 8 00:26:28.718646 containerd[1736]: time="2025-11-08T00:26:28.718590940Z" level=info msg="StopPodSandbox for \"61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169\"" Nov 8 00:26:28.845562 containerd[1736]: 2025-11-08 00:26:28.787 [INFO][5575] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169" Nov 8 00:26:28.845562 containerd[1736]: 2025-11-08 00:26:28.788 [INFO][5575] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169" iface="eth0" netns="/var/run/netns/cni-e8c43bff-6794-7dee-1a8f-69ccd7074920" Nov 8 00:26:28.845562 containerd[1736]: 2025-11-08 00:26:28.788 [INFO][5575] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169" iface="eth0" netns="/var/run/netns/cni-e8c43bff-6794-7dee-1a8f-69ccd7074920" Nov 8 00:26:28.845562 containerd[1736]: 2025-11-08 00:26:28.788 [INFO][5575] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169" iface="eth0" netns="/var/run/netns/cni-e8c43bff-6794-7dee-1a8f-69ccd7074920" Nov 8 00:26:28.845562 containerd[1736]: 2025-11-08 00:26:28.788 [INFO][5575] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169" Nov 8 00:26:28.845562 containerd[1736]: 2025-11-08 00:26:28.788 [INFO][5575] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169" Nov 8 00:26:28.845562 containerd[1736]: 2025-11-08 00:26:28.831 [INFO][5584] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169" HandleID="k8s-pod-network.61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169" Workload="ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--847dc76596--p994f-eth0" Nov 8 00:26:28.845562 containerd[1736]: 2025-11-08 00:26:28.831 [INFO][5584] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:28.845562 containerd[1736]: 2025-11-08 00:26:28.831 [INFO][5584] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:28.845562 containerd[1736]: 2025-11-08 00:26:28.839 [WARNING][5584] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169" HandleID="k8s-pod-network.61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169" Workload="ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--847dc76596--p994f-eth0" Nov 8 00:26:28.845562 containerd[1736]: 2025-11-08 00:26:28.839 [INFO][5584] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169" HandleID="k8s-pod-network.61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169" Workload="ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--847dc76596--p994f-eth0" Nov 8 00:26:28.845562 containerd[1736]: 2025-11-08 00:26:28.841 [INFO][5584] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:28.845562 containerd[1736]: 2025-11-08 00:26:28.843 [INFO][5575] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169" Nov 8 00:26:28.845562 containerd[1736]: time="2025-11-08T00:26:28.845479384Z" level=info msg="TearDown network for sandbox \"61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169\" successfully" Nov 8 00:26:28.846343 containerd[1736]: time="2025-11-08T00:26:28.845519684Z" level=info msg="StopPodSandbox for \"61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169\" returns successfully" Nov 8 00:26:28.847548 containerd[1736]: time="2025-11-08T00:26:28.847503196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-847dc76596-p994f,Uid:c8f9b682-1403-4984-8b7e-efa798fabe9d,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:26:28.876442 systemd[1]: run-netns-cni\x2de8c43bff\x2d6794\x2d7dee\x2d1a8f\x2d69ccd7074920.mount: Deactivated successfully. 
Nov 8 00:26:28.878504 containerd[1736]: time="2025-11-08T00:26:28.878406577Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:26:28.893181 containerd[1736]: time="2025-11-08T00:26:28.892998763Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:26:28.893181 containerd[1736]: time="2025-11-08T00:26:28.893127064Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:26:28.894013 kubelet[3265]: E1108 00:26:28.893926 3265 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:26:28.894013 kubelet[3265]: E1108 00:26:28.893986 3265 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:26:28.896583 kubelet[3265]: E1108 00:26:28.896487 3265 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lsj2l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-9cc5c_calico-system(5435b4b0-7a30-4d83-a845-ff6ed8ff1797): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:26:28.897785 kubelet[3265]: E1108 00:26:28.897685 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9cc5c" podUID="5435b4b0-7a30-4d83-a845-ff6ed8ff1797" Nov 8 00:26:28.997227 systemd-networkd[1354]: cali3392f7e9cae: Gained IPv6LL Nov 8 00:26:29.014734 kubelet[3265]: E1108 00:26:29.014086 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-847dc76596-gs868" podUID="f9dd6f03-0839-47a7-a4a1-75b8d5be8ef2" Nov 8 00:26:29.014734 kubelet[3265]: E1108 00:26:29.014637 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9cc5c" podUID="5435b4b0-7a30-4d83-a845-ff6ed8ff1797" Nov 8 00:26:29.045555 kubelet[3265]: I1108 00:26:29.045354 3265 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-bm6fd" podStartSLOduration=47.045331957 podStartE2EDuration="47.045331957s" podCreationTimestamp="2025-11-08 00:25:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:26:29.027726354 +0000 UTC m=+53.451767996" 
watchObservedRunningTime="2025-11-08 00:26:29.045331957 +0000 UTC m=+53.469373599" Nov 8 00:26:29.064140 systemd-networkd[1354]: cali3257e177778: Link UP Nov 8 00:26:29.064416 systemd-networkd[1354]: cali3257e177778: Gained carrier Nov 8 00:26:29.089807 containerd[1736]: 2025-11-08 00:26:28.952 [INFO][5591] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--847dc76596--p994f-eth0 calico-apiserver-847dc76596- calico-apiserver c8f9b682-1403-4984-8b7e-efa798fabe9d 1068 0 2025-11-08 00:25:52 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:847dc76596 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-036966ce4d calico-apiserver-847dc76596-p994f eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3257e177778 [] [] }} ContainerID="ae64f25cb55b7a1ad4e687b6a53a8cff221c8d6baf5c32531389a2250e2707ae" Namespace="calico-apiserver" Pod="calico-apiserver-847dc76596-p994f" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--847dc76596--p994f-" Nov 8 00:26:29.089807 containerd[1736]: 2025-11-08 00:26:28.952 [INFO][5591] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ae64f25cb55b7a1ad4e687b6a53a8cff221c8d6baf5c32531389a2250e2707ae" Namespace="calico-apiserver" Pod="calico-apiserver-847dc76596-p994f" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--847dc76596--p994f-eth0" Nov 8 00:26:29.089807 containerd[1736]: 2025-11-08 00:26:28.982 [INFO][5603] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ae64f25cb55b7a1ad4e687b6a53a8cff221c8d6baf5c32531389a2250e2707ae" HandleID="k8s-pod-network.ae64f25cb55b7a1ad4e687b6a53a8cff221c8d6baf5c32531389a2250e2707ae" Workload="ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--847dc76596--p994f-eth0" Nov 8 00:26:29.089807 containerd[1736]: 2025-11-08 00:26:28.982 [INFO][5603] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ae64f25cb55b7a1ad4e687b6a53a8cff221c8d6baf5c32531389a2250e2707ae" HandleID="k8s-pod-network.ae64f25cb55b7a1ad4e687b6a53a8cff221c8d6baf5c32531389a2250e2707ae" Workload="ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--847dc76596--p994f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00025b2d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-036966ce4d", "pod":"calico-apiserver-847dc76596-p994f", "timestamp":"2025-11-08 00:26:28.98277189 +0000 UTC"}, Hostname:"ci-4081.3.6-n-036966ce4d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:26:29.089807 containerd[1736]: 2025-11-08 00:26:28.982 [INFO][5603] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:29.089807 containerd[1736]: 2025-11-08 00:26:28.983 [INFO][5603] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:26:29.089807 containerd[1736]: 2025-11-08 00:26:28.983 [INFO][5603] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-036966ce4d' Nov 8 00:26:29.089807 containerd[1736]: 2025-11-08 00:26:28.992 [INFO][5603] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ae64f25cb55b7a1ad4e687b6a53a8cff221c8d6baf5c32531389a2250e2707ae" host="ci-4081.3.6-n-036966ce4d" Nov 8 00:26:29.089807 containerd[1736]: 2025-11-08 00:26:29.004 [INFO][5603] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-036966ce4d" Nov 8 00:26:29.089807 containerd[1736]: 2025-11-08 00:26:29.011 [INFO][5603] ipam/ipam.go 511: Trying affinity for 192.168.80.0/26 host="ci-4081.3.6-n-036966ce4d" Nov 8 00:26:29.089807 containerd[1736]: 2025-11-08 00:26:29.013 [INFO][5603] ipam/ipam.go 158: Attempting to load block cidr=192.168.80.0/26 host="ci-4081.3.6-n-036966ce4d" Nov 8 00:26:29.089807 containerd[1736]: 2025-11-08 00:26:29.018 [INFO][5603] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.80.0/26 host="ci-4081.3.6-n-036966ce4d" Nov 8 00:26:29.089807 containerd[1736]: 2025-11-08 00:26:29.018 [INFO][5603] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.80.0/26 handle="k8s-pod-network.ae64f25cb55b7a1ad4e687b6a53a8cff221c8d6baf5c32531389a2250e2707ae" host="ci-4081.3.6-n-036966ce4d" Nov 8 00:26:29.089807 containerd[1736]: 2025-11-08 00:26:29.027 [INFO][5603] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ae64f25cb55b7a1ad4e687b6a53a8cff221c8d6baf5c32531389a2250e2707ae Nov 8 00:26:29.089807 containerd[1736]: 2025-11-08 00:26:29.037 [INFO][5603] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.80.0/26 handle="k8s-pod-network.ae64f25cb55b7a1ad4e687b6a53a8cff221c8d6baf5c32531389a2250e2707ae" host="ci-4081.3.6-n-036966ce4d" Nov 8 00:26:29.089807 containerd[1736]: 2025-11-08 00:26:29.053 [INFO][5603] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.80.9/26] block=192.168.80.0/26 handle="k8s-pod-network.ae64f25cb55b7a1ad4e687b6a53a8cff221c8d6baf5c32531389a2250e2707ae" host="ci-4081.3.6-n-036966ce4d" Nov 8 00:26:29.089807 containerd[1736]: 2025-11-08 00:26:29.053 [INFO][5603] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.80.9/26] handle="k8s-pod-network.ae64f25cb55b7a1ad4e687b6a53a8cff221c8d6baf5c32531389a2250e2707ae" host="ci-4081.3.6-n-036966ce4d" Nov 8 00:26:29.089807 containerd[1736]: 2025-11-08 00:26:29.053 [INFO][5603] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:26:29.089807 containerd[1736]: 2025-11-08 00:26:29.054 [INFO][5603] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.80.9/26] IPv6=[] ContainerID="ae64f25cb55b7a1ad4e687b6a53a8cff221c8d6baf5c32531389a2250e2707ae" HandleID="k8s-pod-network.ae64f25cb55b7a1ad4e687b6a53a8cff221c8d6baf5c32531389a2250e2707ae" Workload="ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--847dc76596--p994f-eth0" Nov 8 00:26:29.090744 containerd[1736]: 2025-11-08 00:26:29.056 [INFO][5591] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ae64f25cb55b7a1ad4e687b6a53a8cff221c8d6baf5c32531389a2250e2707ae" Namespace="calico-apiserver" Pod="calico-apiserver-847dc76596-p994f" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--847dc76596--p994f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--847dc76596--p994f-eth0", GenerateName:"calico-apiserver-847dc76596-", Namespace:"calico-apiserver", SelfLink:"", UID:"c8f9b682-1403-4984-8b7e-efa798fabe9d", ResourceVersion:"1068", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 25, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"847dc76596", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-036966ce4d", ContainerID:"", Pod:"calico-apiserver-847dc76596-p994f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.80.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3257e177778", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:29.090744 containerd[1736]: 2025-11-08 00:26:29.056 [INFO][5591] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.80.9/32] ContainerID="ae64f25cb55b7a1ad4e687b6a53a8cff221c8d6baf5c32531389a2250e2707ae" Namespace="calico-apiserver" Pod="calico-apiserver-847dc76596-p994f" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--847dc76596--p994f-eth0" Nov 8 00:26:29.090744 containerd[1736]: 2025-11-08 00:26:29.056 [INFO][5591] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3257e177778 ContainerID="ae64f25cb55b7a1ad4e687b6a53a8cff221c8d6baf5c32531389a2250e2707ae" Namespace="calico-apiserver" Pod="calico-apiserver-847dc76596-p994f" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--847dc76596--p994f-eth0" Nov 8 00:26:29.090744 containerd[1736]: 2025-11-08 00:26:29.061 [INFO][5591] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ae64f25cb55b7a1ad4e687b6a53a8cff221c8d6baf5c32531389a2250e2707ae" Namespace="calico-apiserver" Pod="calico-apiserver-847dc76596-p994f" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--847dc76596--p994f-eth0" Nov 8 00:26:29.090744 containerd[1736]: 2025-11-08 00:26:29.061 [INFO][5591] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ae64f25cb55b7a1ad4e687b6a53a8cff221c8d6baf5c32531389a2250e2707ae" Namespace="calico-apiserver" Pod="calico-apiserver-847dc76596-p994f" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--847dc76596--p994f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--847dc76596--p994f-eth0", GenerateName:"calico-apiserver-847dc76596-", Namespace:"calico-apiserver", SelfLink:"", UID:"c8f9b682-1403-4984-8b7e-efa798fabe9d", ResourceVersion:"1068", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 25, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"847dc76596", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-036966ce4d", ContainerID:"ae64f25cb55b7a1ad4e687b6a53a8cff221c8d6baf5c32531389a2250e2707ae", Pod:"calico-apiserver-847dc76596-p994f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.80.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3257e177778", MAC:"52:04:6d:dc:7b:c6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:29.090744 containerd[1736]: 2025-11-08 00:26:29.086 [INFO][5591] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ae64f25cb55b7a1ad4e687b6a53a8cff221c8d6baf5c32531389a2250e2707ae" Namespace="calico-apiserver" Pod="calico-apiserver-847dc76596-p994f" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--847dc76596--p994f-eth0" Nov 8 00:26:29.134767 containerd[1736]: time="2025-11-08T00:26:29.133268873Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:26:29.134767 containerd[1736]: time="2025-11-08T00:26:29.133345374Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:26:29.134767 containerd[1736]: time="2025-11-08T00:26:29.133387074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:29.134767 containerd[1736]: time="2025-11-08T00:26:29.133541275Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:29.169908 systemd[1]: Started cri-containerd-ae64f25cb55b7a1ad4e687b6a53a8cff221c8d6baf5c32531389a2250e2707ae.scope - libcontainer container ae64f25cb55b7a1ad4e687b6a53a8cff221c8d6baf5c32531389a2250e2707ae. 
Nov 8 00:26:29.233576 containerd[1736]: time="2025-11-08T00:26:29.233450161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-847dc76596-p994f,Uid:c8f9b682-1403-4984-8b7e-efa798fabe9d,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"ae64f25cb55b7a1ad4e687b6a53a8cff221c8d6baf5c32531389a2250e2707ae\"" Nov 8 00:26:29.235532 containerd[1736]: time="2025-11-08T00:26:29.235494673Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:26:29.479580 containerd[1736]: time="2025-11-08T00:26:29.479523605Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:26:29.482626 containerd[1736]: time="2025-11-08T00:26:29.482570623Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:26:29.482751 containerd[1736]: time="2025-11-08T00:26:29.482667324Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:26:29.482882 kubelet[3265]: E1108 00:26:29.482842 3265 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:26:29.482956 kubelet[3265]: E1108 00:26:29.482896 3265 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:26:29.483116 kubelet[3265]: E1108 00:26:29.483059 3265 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k5nb2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-847dc76596-p994f_calico-apiserver(c8f9b682-1403-4984-8b7e-efa798fabe9d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:26:29.484558 kubelet[3265]: E1108 00:26:29.484480 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-847dc76596-p994f" podUID="c8f9b682-1403-4984-8b7e-efa798fabe9d" Nov 8 00:26:29.509302 systemd-networkd[1354]: calice000dd20c7: Gained IPv6LL Nov 8 00:26:30.014104 kubelet[3265]: E1108 00:26:30.013877 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-847dc76596-p994f" podUID="c8f9b682-1403-4984-8b7e-efa798fabe9d" Nov 8 00:26:30.014587 kubelet[3265]: E1108 00:26:30.014499 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9cc5c" podUID="5435b4b0-7a30-4d83-a845-ff6ed8ff1797" Nov 8 00:26:30.085284 systemd-networkd[1354]: cali58dde8405e5: Gained IPv6LL Nov 8 00:26:30.724010 systemd-networkd[1354]: cali3257e177778: Gained IPv6LL Nov 8 00:26:31.016062 kubelet[3265]: E1108 00:26:31.015617 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-847dc76596-p994f" podUID="c8f9b682-1403-4984-8b7e-efa798fabe9d" Nov 8 00:26:34.719002 containerd[1736]: time="2025-11-08T00:26:34.718946253Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:26:34.966245 containerd[1736]: time="2025-11-08T00:26:34.966001603Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:26:34.969741 containerd[1736]: time="2025-11-08T00:26:34.969466823Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:26:34.969741 containerd[1736]: time="2025-11-08T00:26:34.969578024Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:26:34.970400 kubelet[3265]: E1108 00:26:34.970330 3265 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:26:34.970400 kubelet[3265]: E1108 00:26:34.970392 3265 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:26:34.971074 kubelet[3265]: E1108 00:26:34.970554 3265 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b52ac106012e441c897a14b0417ed820,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xtpmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7b877f7c4d-czsrv_calico-system(eb4448af-cf19-4ce5-bbad-22d98ef7ab44): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:26:34.974015 containerd[1736]: time="2025-11-08T00:26:34.973909449Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:26:35.217654 containerd[1736]: time="2025-11-08T00:26:35.217609779Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:26:35.222738 containerd[1736]: time="2025-11-08T00:26:35.222414608Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:26:35.222738 containerd[1736]: time="2025-11-08T00:26:35.222503108Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:26:35.222913 kubelet[3265]: E1108 00:26:35.222689 3265 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:26:35.222913 kubelet[3265]: E1108 00:26:35.222778 3265 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:26:35.223021 kubelet[3265]: E1108 00:26:35.222941 3265 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xtpmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7b877f7c4d-czsrv_calico-system(eb4448af-cf19-4ce5-bbad-22d98ef7ab44): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:26:35.224404 kubelet[3265]: E1108 00:26:35.224348 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7b877f7c4d-czsrv" podUID="eb4448af-cf19-4ce5-bbad-22d98ef7ab44" Nov 8 00:26:35.708780 containerd[1736]: time="2025-11-08T00:26:35.708687561Z" level=info msg="StopPodSandbox for \"3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f\"" Nov 8 00:26:35.836178 containerd[1736]: 2025-11-08 00:26:35.781 [WARNING][5684] cni-plugin/k8s.go 604: CNI_CONTAINERID 
does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--847dc76596--gs868-eth0", GenerateName:"calico-apiserver-847dc76596-", Namespace:"calico-apiserver", SelfLink:"", UID:"f9dd6f03-0839-47a7-a4a1-75b8d5be8ef2", ResourceVersion:"1090", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 25, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"847dc76596", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-036966ce4d", ContainerID:"84b854b8c668f7d4ba396544b9b1bbd1fa020a4afbf5052040e010e810cc16b2", Pod:"calico-apiserver-847dc76596-gs868", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.80.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3392f7e9cae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:35.836178 containerd[1736]: 2025-11-08 00:26:35.782 [INFO][5684] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f" Nov 8 00:26:35.836178 containerd[1736]: 2025-11-08 00:26:35.782 [INFO][5684] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f" iface="eth0" netns="" Nov 8 00:26:35.836178 containerd[1736]: 2025-11-08 00:26:35.782 [INFO][5684] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f" Nov 8 00:26:35.836178 containerd[1736]: 2025-11-08 00:26:35.782 [INFO][5684] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f" Nov 8 00:26:35.836178 containerd[1736]: 2025-11-08 00:26:35.812 [INFO][5693] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f" HandleID="k8s-pod-network.3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f" Workload="ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--847dc76596--gs868-eth0" Nov 8 00:26:35.836178 containerd[1736]: 2025-11-08 00:26:35.812 [INFO][5693] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:35.836178 containerd[1736]: 2025-11-08 00:26:35.812 [INFO][5693] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:35.836178 containerd[1736]: 2025-11-08 00:26:35.829 [WARNING][5693] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f" HandleID="k8s-pod-network.3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f" Workload="ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--847dc76596--gs868-eth0" Nov 8 00:26:35.836178 containerd[1736]: 2025-11-08 00:26:35.829 [INFO][5693] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f" HandleID="k8s-pod-network.3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f" Workload="ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--847dc76596--gs868-eth0" Nov 8 00:26:35.836178 containerd[1736]: 2025-11-08 00:26:35.831 [INFO][5693] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:35.836178 containerd[1736]: 2025-11-08 00:26:35.833 [INFO][5684] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f" Nov 8 00:26:35.836178 containerd[1736]: time="2025-11-08T00:26:35.835980708Z" level=info msg="TearDown network for sandbox \"3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f\" successfully" Nov 8 00:26:35.836178 containerd[1736]: time="2025-11-08T00:26:35.836011509Z" level=info msg="StopPodSandbox for \"3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f\" returns successfully" Nov 8 00:26:35.837298 containerd[1736]: time="2025-11-08T00:26:35.836769713Z" level=info msg="RemovePodSandbox for \"3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f\"" Nov 8 00:26:35.837298 containerd[1736]: time="2025-11-08T00:26:35.836807013Z" level=info msg="Forcibly stopping sandbox \"3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f\"" Nov 8 00:26:35.957732 containerd[1736]: 2025-11-08 00:26:35.895 [WARNING][5708] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--847dc76596--gs868-eth0", GenerateName:"calico-apiserver-847dc76596-", Namespace:"calico-apiserver", SelfLink:"", UID:"f9dd6f03-0839-47a7-a4a1-75b8d5be8ef2", ResourceVersion:"1090", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 25, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"847dc76596", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-036966ce4d", ContainerID:"84b854b8c668f7d4ba396544b9b1bbd1fa020a4afbf5052040e010e810cc16b2", Pod:"calico-apiserver-847dc76596-gs868", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.80.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3392f7e9cae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:35.957732 containerd[1736]: 2025-11-08 00:26:35.896 [INFO][5708] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f" Nov 8 00:26:35.957732 containerd[1736]: 2025-11-08 00:26:35.896 [INFO][5708] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f" iface="eth0" netns="" Nov 8 00:26:35.957732 containerd[1736]: 2025-11-08 00:26:35.896 [INFO][5708] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f" Nov 8 00:26:35.957732 containerd[1736]: 2025-11-08 00:26:35.896 [INFO][5708] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f" Nov 8 00:26:35.957732 containerd[1736]: 2025-11-08 00:26:35.931 [INFO][5716] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f" HandleID="k8s-pod-network.3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f" Workload="ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--847dc76596--gs868-eth0" Nov 8 00:26:35.957732 containerd[1736]: 2025-11-08 00:26:35.931 [INFO][5716] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:35.957732 containerd[1736]: 2025-11-08 00:26:35.931 [INFO][5716] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:35.957732 containerd[1736]: 2025-11-08 00:26:35.949 [WARNING][5716] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f" HandleID="k8s-pod-network.3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f" Workload="ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--847dc76596--gs868-eth0" Nov 8 00:26:35.957732 containerd[1736]: 2025-11-08 00:26:35.949 [INFO][5716] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f" HandleID="k8s-pod-network.3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f" Workload="ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--847dc76596--gs868-eth0" Nov 8 00:26:35.957732 containerd[1736]: 2025-11-08 00:26:35.953 [INFO][5716] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:35.957732 containerd[1736]: 2025-11-08 00:26:35.955 [INFO][5708] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f" Nov 8 00:26:35.959879 containerd[1736]: time="2025-11-08T00:26:35.957856724Z" level=info msg="TearDown network for sandbox \"3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f\" successfully" Nov 8 00:26:35.974391 containerd[1736]: time="2025-11-08T00:26:35.974066819Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:26:35.974715 containerd[1736]: time="2025-11-08T00:26:35.974578822Z" level=info msg="RemovePodSandbox \"3c07a92af66dfbbaab55ebdd320613fd885882884ace7a1b883d7c934847754f\" returns successfully" Nov 8 00:26:35.976042 containerd[1736]: time="2025-11-08T00:26:35.975720528Z" level=info msg="StopPodSandbox for \"b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba\"" Nov 8 00:26:36.080073 containerd[1736]: 2025-11-08 00:26:36.032 [WARNING][5731] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--036966ce4d-k8s-coredns--674b8bbfcf--bm6fd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"96be1a30-6407-4196-820d-f11b48aff85e", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 25, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-036966ce4d", ContainerID:"bd8a6e957a171d2b91f647859b0fa9a09dca7ebb41536ad2e1e5ea054be4a492", Pod:"coredns-674b8bbfcf-bm6fd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.80.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calice000dd20c7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:36.080073 containerd[1736]: 2025-11-08 00:26:36.033 [INFO][5731] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba" Nov 8 00:26:36.080073 containerd[1736]: 2025-11-08 00:26:36.033 [INFO][5731] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba" iface="eth0" netns="" Nov 8 00:26:36.080073 containerd[1736]: 2025-11-08 00:26:36.033 [INFO][5731] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba" Nov 8 00:26:36.080073 containerd[1736]: 2025-11-08 00:26:36.033 [INFO][5731] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba" Nov 8 00:26:36.080073 containerd[1736]: 2025-11-08 00:26:36.068 [INFO][5738] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba" HandleID="k8s-pod-network.b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba" Workload="ci--4081.3.6--n--036966ce4d-k8s-coredns--674b8bbfcf--bm6fd-eth0" Nov 8 00:26:36.080073 containerd[1736]: 2025-11-08 00:26:36.068 [INFO][5738] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:36.080073 containerd[1736]: 2025-11-08 00:26:36.068 [INFO][5738] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:26:36.080073 containerd[1736]: 2025-11-08 00:26:36.075 [WARNING][5738] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba" HandleID="k8s-pod-network.b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba" Workload="ci--4081.3.6--n--036966ce4d-k8s-coredns--674b8bbfcf--bm6fd-eth0" Nov 8 00:26:36.080073 containerd[1736]: 2025-11-08 00:26:36.075 [INFO][5738] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba" HandleID="k8s-pod-network.b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba" Workload="ci--4081.3.6--n--036966ce4d-k8s-coredns--674b8bbfcf--bm6fd-eth0" Nov 8 00:26:36.080073 containerd[1736]: 2025-11-08 00:26:36.076 [INFO][5738] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:36.080073 containerd[1736]: 2025-11-08 00:26:36.078 [INFO][5731] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba" Nov 8 00:26:36.081453 containerd[1736]: time="2025-11-08T00:26:36.080859646Z" level=info msg="TearDown network for sandbox \"b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba\" successfully" Nov 8 00:26:36.081453 containerd[1736]: time="2025-11-08T00:26:36.080898246Z" level=info msg="StopPodSandbox for \"b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba\" returns successfully" Nov 8 00:26:36.082184 containerd[1736]: time="2025-11-08T00:26:36.081963352Z" level=info msg="RemovePodSandbox for \"b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba\"" Nov 8 00:26:36.082184 containerd[1736]: time="2025-11-08T00:26:36.082003552Z" level=info msg="Forcibly stopping sandbox \"b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba\"" Nov 8 00:26:36.185497 containerd[1736]: 2025-11-08 00:26:36.130 [WARNING][5752] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--036966ce4d-k8s-coredns--674b8bbfcf--bm6fd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"96be1a30-6407-4196-820d-f11b48aff85e", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 25, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-036966ce4d", ContainerID:"bd8a6e957a171d2b91f647859b0fa9a09dca7ebb41536ad2e1e5ea054be4a492", Pod:"coredns-674b8bbfcf-bm6fd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.80.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calice000dd20c7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:36.185497 containerd[1736]: 2025-11-08 00:26:36.131 [INFO][5752] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba" Nov 8 00:26:36.185497 containerd[1736]: 2025-11-08 00:26:36.131 [INFO][5752] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba" iface="eth0" netns="" Nov 8 00:26:36.185497 containerd[1736]: 2025-11-08 00:26:36.131 [INFO][5752] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba" Nov 8 00:26:36.185497 containerd[1736]: 2025-11-08 00:26:36.131 [INFO][5752] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba" Nov 8 00:26:36.185497 containerd[1736]: 2025-11-08 00:26:36.163 [INFO][5759] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba" HandleID="k8s-pod-network.b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba" Workload="ci--4081.3.6--n--036966ce4d-k8s-coredns--674b8bbfcf--bm6fd-eth0" Nov 8 00:26:36.185497 containerd[1736]: 2025-11-08 00:26:36.163 [INFO][5759] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:36.185497 containerd[1736]: 2025-11-08 00:26:36.164 [INFO][5759] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:26:36.185497 containerd[1736]: 2025-11-08 00:26:36.180 [WARNING][5759] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba" HandleID="k8s-pod-network.b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba" Workload="ci--4081.3.6--n--036966ce4d-k8s-coredns--674b8bbfcf--bm6fd-eth0" Nov 8 00:26:36.185497 containerd[1736]: 2025-11-08 00:26:36.180 [INFO][5759] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba" HandleID="k8s-pod-network.b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba" Workload="ci--4081.3.6--n--036966ce4d-k8s-coredns--674b8bbfcf--bm6fd-eth0" Nov 8 00:26:36.185497 containerd[1736]: 2025-11-08 00:26:36.182 [INFO][5759] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:36.185497 containerd[1736]: 2025-11-08 00:26:36.183 [INFO][5752] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba" Nov 8 00:26:36.187272 containerd[1736]: time="2025-11-08T00:26:36.185792561Z" level=info msg="TearDown network for sandbox \"b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba\" successfully" Nov 8 00:26:36.195675 containerd[1736]: time="2025-11-08T00:26:36.195511118Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:26:36.195675 containerd[1736]: time="2025-11-08T00:26:36.195624719Z" level=info msg="RemovePodSandbox \"b90bb29e952ceb4186b46e19dc8a02a032db8e6237309d20ef6fdadd1ddec4ba\" returns successfully" Nov 8 00:26:36.196882 containerd[1736]: time="2025-11-08T00:26:36.196515324Z" level=info msg="StopPodSandbox for \"8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69\"" Nov 8 00:26:36.303413 containerd[1736]: 2025-11-08 00:26:36.251 [WARNING][5773] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--036966ce4d-k8s-coredns--674b8bbfcf--xhgqn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ebfac5a4-d600-45d4-a79b-901e29ad49f6", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 25, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-036966ce4d", ContainerID:"528f2fd1e4e61b717c90dcfb5dce7fb701acb754885d1e370294eda0cc39c189", Pod:"coredns-674b8bbfcf-xhgqn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.80.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calief11604cfb5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:36.303413 containerd[1736]: 2025-11-08 00:26:36.252 [INFO][5773] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69" Nov 8 00:26:36.303413 containerd[1736]: 2025-11-08 00:26:36.252 [INFO][5773] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69" iface="eth0" netns="" Nov 8 00:26:36.303413 containerd[1736]: 2025-11-08 00:26:36.252 [INFO][5773] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69" Nov 8 00:26:36.303413 containerd[1736]: 2025-11-08 00:26:36.252 [INFO][5773] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69" Nov 8 00:26:36.303413 containerd[1736]: 2025-11-08 00:26:36.288 [INFO][5780] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69" HandleID="k8s-pod-network.8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69" Workload="ci--4081.3.6--n--036966ce4d-k8s-coredns--674b8bbfcf--xhgqn-eth0" Nov 8 00:26:36.303413 containerd[1736]: 2025-11-08 00:26:36.289 [INFO][5780] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:36.303413 containerd[1736]: 2025-11-08 00:26:36.289 [INFO][5780] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:26:36.303413 containerd[1736]: 2025-11-08 00:26:36.296 [WARNING][5780] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69" HandleID="k8s-pod-network.8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69" Workload="ci--4081.3.6--n--036966ce4d-k8s-coredns--674b8bbfcf--xhgqn-eth0" Nov 8 00:26:36.303413 containerd[1736]: 2025-11-08 00:26:36.296 [INFO][5780] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69" HandleID="k8s-pod-network.8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69" Workload="ci--4081.3.6--n--036966ce4d-k8s-coredns--674b8bbfcf--xhgqn-eth0" Nov 8 00:26:36.303413 containerd[1736]: 2025-11-08 00:26:36.299 [INFO][5780] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:36.303413 containerd[1736]: 2025-11-08 00:26:36.301 [INFO][5773] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69" Nov 8 00:26:36.308226 containerd[1736]: time="2025-11-08T00:26:36.304239156Z" level=info msg="TearDown network for sandbox \"8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69\" successfully" Nov 8 00:26:36.308226 containerd[1736]: time="2025-11-08T00:26:36.304380457Z" level=info msg="StopPodSandbox for \"8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69\" returns successfully" Nov 8 00:26:36.308226 containerd[1736]: time="2025-11-08T00:26:36.307054873Z" level=info msg="RemovePodSandbox for \"8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69\"" Nov 8 00:26:36.308226 containerd[1736]: time="2025-11-08T00:26:36.307091873Z" level=info msg="Forcibly stopping sandbox \"8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69\"" Nov 8 00:26:36.415422 containerd[1736]: 2025-11-08 00:26:36.375 [WARNING][5794] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--036966ce4d-k8s-coredns--674b8bbfcf--xhgqn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ebfac5a4-d600-45d4-a79b-901e29ad49f6", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 25, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-036966ce4d", ContainerID:"528f2fd1e4e61b717c90dcfb5dce7fb701acb754885d1e370294eda0cc39c189", Pod:"coredns-674b8bbfcf-xhgqn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.80.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calief11604cfb5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:36.415422 containerd[1736]: 2025-11-08 00:26:36.375 [INFO][5794] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69" Nov 8 00:26:36.415422 containerd[1736]: 2025-11-08 00:26:36.375 [INFO][5794] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69" iface="eth0" netns="" Nov 8 00:26:36.415422 containerd[1736]: 2025-11-08 00:26:36.375 [INFO][5794] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69" Nov 8 00:26:36.415422 containerd[1736]: 2025-11-08 00:26:36.375 [INFO][5794] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69" Nov 8 00:26:36.415422 containerd[1736]: 2025-11-08 00:26:36.405 [INFO][5801] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69" HandleID="k8s-pod-network.8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69" Workload="ci--4081.3.6--n--036966ce4d-k8s-coredns--674b8bbfcf--xhgqn-eth0" Nov 8 00:26:36.415422 containerd[1736]: 2025-11-08 00:26:36.405 [INFO][5801] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:36.415422 containerd[1736]: 2025-11-08 00:26:36.405 [INFO][5801] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:26:36.415422 containerd[1736]: 2025-11-08 00:26:36.411 [WARNING][5801] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69" HandleID="k8s-pod-network.8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69" Workload="ci--4081.3.6--n--036966ce4d-k8s-coredns--674b8bbfcf--xhgqn-eth0" Nov 8 00:26:36.415422 containerd[1736]: 2025-11-08 00:26:36.411 [INFO][5801] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69" HandleID="k8s-pod-network.8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69" Workload="ci--4081.3.6--n--036966ce4d-k8s-coredns--674b8bbfcf--xhgqn-eth0" Nov 8 00:26:36.415422 containerd[1736]: 2025-11-08 00:26:36.412 [INFO][5801] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:36.415422 containerd[1736]: 2025-11-08 00:26:36.413 [INFO][5794] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69" Nov 8 00:26:36.415422 containerd[1736]: time="2025-11-08T00:26:36.415196708Z" level=info msg="TearDown network for sandbox \"8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69\" successfully" Nov 8 00:26:36.422968 containerd[1736]: time="2025-11-08T00:26:36.422920553Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:26:36.423140 containerd[1736]: time="2025-11-08T00:26:36.422985953Z" level=info msg="RemovePodSandbox \"8ad0f6527914a9dae304b3afd571706d59b0eef28b587109ccc2dfa53ac34e69\" returns successfully" Nov 8 00:26:36.423522 containerd[1736]: time="2025-11-08T00:26:36.423496156Z" level=info msg="StopPodSandbox for \"e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b\"" Nov 8 00:26:36.503501 containerd[1736]: 2025-11-08 00:26:36.464 [WARNING][5815] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--036966ce4d-k8s-calico--kube--controllers--5ccd97dd97--f9lsk-eth0", GenerateName:"calico-kube-controllers-5ccd97dd97-", Namespace:"calico-system", SelfLink:"", UID:"415c1089-29e6-4262-b21f-188443e0b159", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 25, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5ccd97dd97", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-036966ce4d", ContainerID:"fb05b2b2190ede4c39bc56d6c4863f894b2031e747de2b6d9822aa869978a622", Pod:"calico-kube-controllers-5ccd97dd97-f9lsk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.80.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califf07c9633fd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:36.503501 containerd[1736]: 2025-11-08 00:26:36.465 [INFO][5815] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b" Nov 8 00:26:36.503501 containerd[1736]: 2025-11-08 00:26:36.465 [INFO][5815] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b" iface="eth0" netns="" Nov 8 00:26:36.503501 containerd[1736]: 2025-11-08 00:26:36.465 [INFO][5815] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b" Nov 8 00:26:36.503501 containerd[1736]: 2025-11-08 00:26:36.465 [INFO][5815] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b" Nov 8 00:26:36.503501 containerd[1736]: 2025-11-08 00:26:36.488 [INFO][5822] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b" HandleID="k8s-pod-network.e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b" Workload="ci--4081.3.6--n--036966ce4d-k8s-calico--kube--controllers--5ccd97dd97--f9lsk-eth0" Nov 8 00:26:36.503501 containerd[1736]: 2025-11-08 00:26:36.488 [INFO][5822] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:36.503501 containerd[1736]: 2025-11-08 00:26:36.488 [INFO][5822] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:36.503501 containerd[1736]: 2025-11-08 00:26:36.496 [WARNING][5822] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b" HandleID="k8s-pod-network.e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b" Workload="ci--4081.3.6--n--036966ce4d-k8s-calico--kube--controllers--5ccd97dd97--f9lsk-eth0" Nov 8 00:26:36.503501 containerd[1736]: 2025-11-08 00:26:36.496 [INFO][5822] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b" HandleID="k8s-pod-network.e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b" Workload="ci--4081.3.6--n--036966ce4d-k8s-calico--kube--controllers--5ccd97dd97--f9lsk-eth0" Nov 8 00:26:36.503501 containerd[1736]: 2025-11-08 00:26:36.498 [INFO][5822] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:36.503501 containerd[1736]: 2025-11-08 00:26:36.501 [INFO][5815] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b" Nov 8 00:26:36.504146 containerd[1736]: time="2025-11-08T00:26:36.503554726Z" level=info msg="TearDown network for sandbox \"e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b\" successfully" Nov 8 00:26:36.504146 containerd[1736]: time="2025-11-08T00:26:36.503589526Z" level=info msg="StopPodSandbox for \"e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b\" returns successfully" Nov 8 00:26:36.504239 containerd[1736]: time="2025-11-08T00:26:36.504177930Z" level=info msg="RemovePodSandbox for \"e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b\"" Nov 8 00:26:36.504239 containerd[1736]: time="2025-11-08T00:26:36.504213730Z" level=info msg="Forcibly stopping sandbox \"e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b\"" Nov 8 00:26:36.659882 containerd[1736]: 2025-11-08 00:26:36.580 [WARNING][5836] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--036966ce4d-k8s-calico--kube--controllers--5ccd97dd97--f9lsk-eth0", GenerateName:"calico-kube-controllers-5ccd97dd97-", Namespace:"calico-system", SelfLink:"", UID:"415c1089-29e6-4262-b21f-188443e0b159", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 25, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5ccd97dd97", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-036966ce4d", ContainerID:"fb05b2b2190ede4c39bc56d6c4863f894b2031e747de2b6d9822aa869978a622", Pod:"calico-kube-controllers-5ccd97dd97-f9lsk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.80.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"califf07c9633fd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:36.659882 containerd[1736]: 2025-11-08 00:26:36.580 [INFO][5836] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b" Nov 8 00:26:36.659882 containerd[1736]: 2025-11-08 00:26:36.580 [INFO][5836] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b" iface="eth0" netns="" Nov 8 00:26:36.659882 containerd[1736]: 2025-11-08 00:26:36.580 [INFO][5836] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b" Nov 8 00:26:36.659882 containerd[1736]: 2025-11-08 00:26:36.580 [INFO][5836] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b" Nov 8 00:26:36.659882 containerd[1736]: 2025-11-08 00:26:36.631 [INFO][5844] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b" HandleID="k8s-pod-network.e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b" Workload="ci--4081.3.6--n--036966ce4d-k8s-calico--kube--controllers--5ccd97dd97--f9lsk-eth0" Nov 8 00:26:36.659882 containerd[1736]: 2025-11-08 00:26:36.631 [INFO][5844] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:36.659882 containerd[1736]: 2025-11-08 00:26:36.631 [INFO][5844] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:36.659882 containerd[1736]: 2025-11-08 00:26:36.652 [WARNING][5844] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b" HandleID="k8s-pod-network.e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b" Workload="ci--4081.3.6--n--036966ce4d-k8s-calico--kube--controllers--5ccd97dd97--f9lsk-eth0" Nov 8 00:26:36.659882 containerd[1736]: 2025-11-08 00:26:36.652 [INFO][5844] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b" HandleID="k8s-pod-network.e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b" Workload="ci--4081.3.6--n--036966ce4d-k8s-calico--kube--controllers--5ccd97dd97--f9lsk-eth0" Nov 8 00:26:36.659882 containerd[1736]: 2025-11-08 00:26:36.654 [INFO][5844] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:36.659882 containerd[1736]: 2025-11-08 00:26:36.657 [INFO][5836] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b" Nov 8 00:26:36.661027 containerd[1736]: time="2025-11-08T00:26:36.659937943Z" level=info msg="TearDown network for sandbox \"e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b\" successfully" Nov 8 00:26:36.669463 containerd[1736]: time="2025-11-08T00:26:36.669406798Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:26:36.669604 containerd[1736]: time="2025-11-08T00:26:36.669521999Z" level=info msg="RemovePodSandbox \"e0c040cd05967bb2447c62533686dda9bb7e87a778554b2cfc1b72f5caf26a7b\" returns successfully" Nov 8 00:26:36.670184 containerd[1736]: time="2025-11-08T00:26:36.670078002Z" level=info msg="StopPodSandbox for \"d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e\"" Nov 8 00:26:36.778795 containerd[1736]: 2025-11-08 00:26:36.734 [WARNING][5858] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-whisker--54c754d646--rhhsj-eth0" Nov 8 00:26:36.778795 containerd[1736]: 2025-11-08 00:26:36.734 [INFO][5858] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e" Nov 8 00:26:36.778795 containerd[1736]: 2025-11-08 00:26:36.734 [INFO][5858] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e" iface="eth0" netns="" Nov 8 00:26:36.778795 containerd[1736]: 2025-11-08 00:26:36.734 [INFO][5858] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e" Nov 8 00:26:36.778795 containerd[1736]: 2025-11-08 00:26:36.734 [INFO][5858] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e" Nov 8 00:26:36.778795 containerd[1736]: 2025-11-08 00:26:36.765 [INFO][5866] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e" HandleID="k8s-pod-network.d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e" Workload="ci--4081.3.6--n--036966ce4d-k8s-whisker--54c754d646--rhhsj-eth0" Nov 8 00:26:36.778795 containerd[1736]: 2025-11-08 00:26:36.765 [INFO][5866] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:36.778795 containerd[1736]: 2025-11-08 00:26:36.765 [INFO][5866] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:36.778795 containerd[1736]: 2025-11-08 00:26:36.773 [WARNING][5866] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e" HandleID="k8s-pod-network.d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e" Workload="ci--4081.3.6--n--036966ce4d-k8s-whisker--54c754d646--rhhsj-eth0" Nov 8 00:26:36.778795 containerd[1736]: 2025-11-08 00:26:36.773 [INFO][5866] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e" HandleID="k8s-pod-network.d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e" Workload="ci--4081.3.6--n--036966ce4d-k8s-whisker--54c754d646--rhhsj-eth0" Nov 8 00:26:36.778795 containerd[1736]: 2025-11-08 00:26:36.775 [INFO][5866] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:36.778795 containerd[1736]: 2025-11-08 00:26:36.776 [INFO][5858] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e" Nov 8 00:26:36.781134 containerd[1736]: time="2025-11-08T00:26:36.778858940Z" level=info msg="TearDown network for sandbox \"d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e\" successfully" Nov 8 00:26:36.781134 containerd[1736]: time="2025-11-08T00:26:36.778892140Z" level=info msg="StopPodSandbox for \"d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e\" returns successfully" Nov 8 00:26:36.781134 containerd[1736]: time="2025-11-08T00:26:36.779500444Z" level=info msg="RemovePodSandbox for \"d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e\"" Nov 8 00:26:36.781134 containerd[1736]: time="2025-11-08T00:26:36.779532144Z" level=info msg="Forcibly stopping sandbox \"d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e\"" Nov 8 00:26:36.876842 containerd[1736]: 2025-11-08 00:26:36.826 [WARNING][5881] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e" WorkloadEndpoint="ci--4081.3.6--n--036966ce4d-k8s-whisker--54c754d646--rhhsj-eth0" Nov 8 00:26:36.876842 containerd[1736]: 2025-11-08 00:26:36.827 [INFO][5881] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e" Nov 8 00:26:36.876842 containerd[1736]: 2025-11-08 00:26:36.827 [INFO][5881] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e" iface="eth0" netns="" Nov 8 00:26:36.876842 containerd[1736]: 2025-11-08 00:26:36.827 [INFO][5881] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e" Nov 8 00:26:36.876842 containerd[1736]: 2025-11-08 00:26:36.827 [INFO][5881] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e" Nov 8 00:26:36.876842 containerd[1736]: 2025-11-08 00:26:36.858 [INFO][5888] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e" HandleID="k8s-pod-network.d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e" Workload="ci--4081.3.6--n--036966ce4d-k8s-whisker--54c754d646--rhhsj-eth0" Nov 8 00:26:36.876842 containerd[1736]: 2025-11-08 00:26:36.858 [INFO][5888] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:36.876842 containerd[1736]: 2025-11-08 00:26:36.859 [INFO][5888] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:36.876842 containerd[1736]: 2025-11-08 00:26:36.872 [WARNING][5888] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e" HandleID="k8s-pod-network.d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e" Workload="ci--4081.3.6--n--036966ce4d-k8s-whisker--54c754d646--rhhsj-eth0" Nov 8 00:26:36.876842 containerd[1736]: 2025-11-08 00:26:36.872 [INFO][5888] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e" HandleID="k8s-pod-network.d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e" Workload="ci--4081.3.6--n--036966ce4d-k8s-whisker--54c754d646--rhhsj-eth0" Nov 8 00:26:36.876842 containerd[1736]: 2025-11-08 00:26:36.873 [INFO][5888] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:36.876842 containerd[1736]: 2025-11-08 00:26:36.875 [INFO][5881] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e" Nov 8 00:26:36.877682 containerd[1736]: time="2025-11-08T00:26:36.876892615Z" level=info msg="TearDown network for sandbox \"d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e\" successfully" Nov 8 00:26:36.884299 containerd[1736]: time="2025-11-08T00:26:36.884247158Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:26:36.884426 containerd[1736]: time="2025-11-08T00:26:36.884338058Z" level=info msg="RemovePodSandbox \"d89af26eed61dacaf5c71d6e5ed5e45b25bbde1e0c880fab829f72caa5681c2e\" returns successfully" Nov 8 00:26:36.885179 containerd[1736]: time="2025-11-08T00:26:36.885143263Z" level=info msg="StopPodSandbox for \"61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169\"" Nov 8 00:26:36.982258 containerd[1736]: 2025-11-08 00:26:36.933 [WARNING][5903] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--847dc76596--p994f-eth0", GenerateName:"calico-apiserver-847dc76596-", Namespace:"calico-apiserver", SelfLink:"", UID:"c8f9b682-1403-4984-8b7e-efa798fabe9d", ResourceVersion:"1112", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 25, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"847dc76596", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-036966ce4d", ContainerID:"ae64f25cb55b7a1ad4e687b6a53a8cff221c8d6baf5c32531389a2250e2707ae", Pod:"calico-apiserver-847dc76596-p994f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.80.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3257e177778", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:36.982258 containerd[1736]: 2025-11-08 00:26:36.933 [INFO][5903] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169" Nov 8 00:26:36.982258 containerd[1736]: 2025-11-08 00:26:36.933 [INFO][5903] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169" iface="eth0" netns="" Nov 8 00:26:36.982258 containerd[1736]: 2025-11-08 00:26:36.933 [INFO][5903] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169" Nov 8 00:26:36.982258 containerd[1736]: 2025-11-08 00:26:36.933 [INFO][5903] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169" Nov 8 00:26:36.982258 containerd[1736]: 2025-11-08 00:26:36.966 [INFO][5910] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169" HandleID="k8s-pod-network.61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169" Workload="ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--847dc76596--p994f-eth0" Nov 8 00:26:36.982258 containerd[1736]: 2025-11-08 00:26:36.966 [INFO][5910] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:36.982258 containerd[1736]: 2025-11-08 00:26:36.966 [INFO][5910] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:36.982258 containerd[1736]: 2025-11-08 00:26:36.975 [WARNING][5910] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169" HandleID="k8s-pod-network.61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169" Workload="ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--847dc76596--p994f-eth0" Nov 8 00:26:36.982258 containerd[1736]: 2025-11-08 00:26:36.975 [INFO][5910] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169" HandleID="k8s-pod-network.61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169" Workload="ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--847dc76596--p994f-eth0" Nov 8 00:26:36.982258 containerd[1736]: 2025-11-08 00:26:36.976 [INFO][5910] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:36.982258 containerd[1736]: 2025-11-08 00:26:36.978 [INFO][5903] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169" Nov 8 00:26:36.982258 containerd[1736]: time="2025-11-08T00:26:36.980257521Z" level=info msg="TearDown network for sandbox \"61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169\" successfully" Nov 8 00:26:36.982258 containerd[1736]: time="2025-11-08T00:26:36.980294521Z" level=info msg="StopPodSandbox for \"61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169\" returns successfully" Nov 8 00:26:36.982258 containerd[1736]: time="2025-11-08T00:26:36.980890224Z" level=info msg="RemovePodSandbox for \"61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169\"" Nov 8 00:26:36.982258 containerd[1736]: time="2025-11-08T00:26:36.980925425Z" level=info msg="Forcibly stopping sandbox \"61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169\"" Nov 8 00:26:37.091140 containerd[1736]: 2025-11-08 00:26:37.038 [WARNING][5924] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--847dc76596--p994f-eth0", GenerateName:"calico-apiserver-847dc76596-", Namespace:"calico-apiserver", SelfLink:"", UID:"c8f9b682-1403-4984-8b7e-efa798fabe9d", ResourceVersion:"1112", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 25, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"847dc76596", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-036966ce4d", ContainerID:"ae64f25cb55b7a1ad4e687b6a53a8cff221c8d6baf5c32531389a2250e2707ae", Pod:"calico-apiserver-847dc76596-p994f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.80.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3257e177778", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:37.091140 containerd[1736]: 2025-11-08 00:26:37.040 [INFO][5924] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169" Nov 8 00:26:37.091140 containerd[1736]: 2025-11-08 00:26:37.040 [INFO][5924] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169" iface="eth0" netns="" Nov 8 00:26:37.091140 containerd[1736]: 2025-11-08 00:26:37.040 [INFO][5924] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169" Nov 8 00:26:37.091140 containerd[1736]: 2025-11-08 00:26:37.040 [INFO][5924] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169" Nov 8 00:26:37.091140 containerd[1736]: 2025-11-08 00:26:37.075 [INFO][5932] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169" HandleID="k8s-pod-network.61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169" Workload="ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--847dc76596--p994f-eth0" Nov 8 00:26:37.091140 containerd[1736]: 2025-11-08 00:26:37.076 [INFO][5932] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:37.091140 containerd[1736]: 2025-11-08 00:26:37.076 [INFO][5932] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:37.091140 containerd[1736]: 2025-11-08 00:26:37.083 [WARNING][5932] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169" HandleID="k8s-pod-network.61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169" Workload="ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--847dc76596--p994f-eth0" Nov 8 00:26:37.091140 containerd[1736]: 2025-11-08 00:26:37.083 [INFO][5932] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169" HandleID="k8s-pod-network.61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169" Workload="ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--847dc76596--p994f-eth0" Nov 8 00:26:37.091140 containerd[1736]: 2025-11-08 00:26:37.087 [INFO][5932] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:37.091140 containerd[1736]: 2025-11-08 00:26:37.089 [INFO][5924] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169" Nov 8 00:26:37.091140 containerd[1736]: time="2025-11-08T00:26:37.090848569Z" level=info msg="TearDown network for sandbox \"61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169\" successfully" Nov 8 00:26:37.099852 containerd[1736]: time="2025-11-08T00:26:37.099802322Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:26:37.099947 containerd[1736]: time="2025-11-08T00:26:37.099880122Z" level=info msg="RemovePodSandbox \"61fb9f9d1a127a9e9d124d5c30bd5c01e0385d21e8e3cb845be26a32dbe74169\" returns successfully" Nov 8 00:26:37.100721 containerd[1736]: time="2025-11-08T00:26:37.100426025Z" level=info msg="StopPodSandbox for \"99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109\"" Nov 8 00:26:37.230000 containerd[1736]: 2025-11-08 00:26:37.162 [WARNING][5946] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--036966ce4d-k8s-csi--node--driver--wpv6d-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"88161698-8450-46cd-aabf-3650fadd565e", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 25, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-036966ce4d", ContainerID:"eb4832bdf74bb867781a4f522be84dd1fd978de39ba363e0c66e66057fa9dd77", Pod:"csi-node-driver-wpv6d", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.80.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic4131ce8790", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:37.230000 containerd[1736]: 2025-11-08 00:26:37.163 [INFO][5946] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109" Nov 8 00:26:37.230000 containerd[1736]: 2025-11-08 00:26:37.163 [INFO][5946] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109" iface="eth0" netns="" Nov 8 00:26:37.230000 containerd[1736]: 2025-11-08 00:26:37.163 [INFO][5946] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109" Nov 8 00:26:37.230000 containerd[1736]: 2025-11-08 00:26:37.163 [INFO][5946] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109" Nov 8 00:26:37.230000 containerd[1736]: 2025-11-08 00:26:37.205 [INFO][5953] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109" HandleID="k8s-pod-network.99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109" Workload="ci--4081.3.6--n--036966ce4d-k8s-csi--node--driver--wpv6d-eth0" Nov 8 00:26:37.230000 containerd[1736]: 2025-11-08 00:26:37.207 [INFO][5953] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:37.230000 containerd[1736]: 2025-11-08 00:26:37.207 [INFO][5953] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:37.230000 containerd[1736]: 2025-11-08 00:26:37.224 [WARNING][5953] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109" HandleID="k8s-pod-network.99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109" Workload="ci--4081.3.6--n--036966ce4d-k8s-csi--node--driver--wpv6d-eth0" Nov 8 00:26:37.230000 containerd[1736]: 2025-11-08 00:26:37.224 [INFO][5953] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109" HandleID="k8s-pod-network.99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109" Workload="ci--4081.3.6--n--036966ce4d-k8s-csi--node--driver--wpv6d-eth0" Nov 8 00:26:37.230000 containerd[1736]: 2025-11-08 00:26:37.226 [INFO][5953] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:37.230000 containerd[1736]: 2025-11-08 00:26:37.228 [INFO][5946] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109" Nov 8 00:26:37.230620 containerd[1736]: time="2025-11-08T00:26:37.230062585Z" level=info msg="TearDown network for sandbox \"99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109\" successfully" Nov 8 00:26:37.230620 containerd[1736]: time="2025-11-08T00:26:37.230093586Z" level=info msg="StopPodSandbox for \"99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109\" returns successfully" Nov 8 00:26:37.231769 containerd[1736]: time="2025-11-08T00:26:37.230912690Z" level=info msg="RemovePodSandbox for \"99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109\"" Nov 8 00:26:37.231769 containerd[1736]: time="2025-11-08T00:26:37.230954391Z" level=info msg="Forcibly stopping sandbox \"99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109\"" Nov 8 00:26:37.388055 containerd[1736]: 2025-11-08 00:26:37.280 [WARNING][5967] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--036966ce4d-k8s-csi--node--driver--wpv6d-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"88161698-8450-46cd-aabf-3650fadd565e", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 25, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-036966ce4d", ContainerID:"eb4832bdf74bb867781a4f522be84dd1fd978de39ba363e0c66e66057fa9dd77", Pod:"csi-node-driver-wpv6d", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.80.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic4131ce8790", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:37.388055 containerd[1736]: 2025-11-08 00:26:37.280 [INFO][5967] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109" Nov 8 00:26:37.388055 containerd[1736]: 2025-11-08 00:26:37.280 [INFO][5967] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109" iface="eth0" netns="" Nov 8 00:26:37.388055 containerd[1736]: 2025-11-08 00:26:37.280 [INFO][5967] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109" Nov 8 00:26:37.388055 containerd[1736]: 2025-11-08 00:26:37.280 [INFO][5967] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109" Nov 8 00:26:37.388055 containerd[1736]: 2025-11-08 00:26:37.334 [INFO][5974] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109" HandleID="k8s-pod-network.99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109" Workload="ci--4081.3.6--n--036966ce4d-k8s-csi--node--driver--wpv6d-eth0" Nov 8 00:26:37.388055 containerd[1736]: 2025-11-08 00:26:37.336 [INFO][5974] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:37.388055 containerd[1736]: 2025-11-08 00:26:37.336 [INFO][5974] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:37.388055 containerd[1736]: 2025-11-08 00:26:37.345 [WARNING][5974] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109" HandleID="k8s-pod-network.99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109" Workload="ci--4081.3.6--n--036966ce4d-k8s-csi--node--driver--wpv6d-eth0" Nov 8 00:26:37.388055 containerd[1736]: 2025-11-08 00:26:37.345 [INFO][5974] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109" HandleID="k8s-pod-network.99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109" Workload="ci--4081.3.6--n--036966ce4d-k8s-csi--node--driver--wpv6d-eth0" Nov 8 00:26:37.388055 containerd[1736]: 2025-11-08 00:26:37.383 [INFO][5974] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:37.388055 containerd[1736]: 2025-11-08 00:26:37.385 [INFO][5967] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109" Nov 8 00:26:37.388055 containerd[1736]: time="2025-11-08T00:26:37.387898211Z" level=info msg="TearDown network for sandbox \"99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109\" successfully" Nov 8 00:26:37.396993 containerd[1736]: time="2025-11-08T00:26:37.396586262Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:26:37.396993 containerd[1736]: time="2025-11-08T00:26:37.396691062Z" level=info msg="RemovePodSandbox \"99004b9c0e3510d2f44f2b1c9d11a8061b973790e8d654da547c2ef72fafa109\" returns successfully" Nov 8 00:26:37.397314 containerd[1736]: time="2025-11-08T00:26:37.397280766Z" level=info msg="StopPodSandbox for \"5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475\"" Nov 8 00:26:37.493821 containerd[1736]: 2025-11-08 00:26:37.449 [WARNING][5988] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--579774f8c5--5sn5r-eth0", GenerateName:"calico-apiserver-579774f8c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"4bf1c931-a10c-42c4-bece-79d61b489c62", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 25, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"579774f8c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-036966ce4d", ContainerID:"782a4d4ad3face84e5969448e1816741ac956f0b097355ebdcc7452c2c7f58ce", Pod:"calico-apiserver-579774f8c5-5sn5r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.80.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali77a93201f5c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:37.493821 containerd[1736]: 2025-11-08 00:26:37.450 [INFO][5988] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475" Nov 8 00:26:37.493821 containerd[1736]: 2025-11-08 00:26:37.450 [INFO][5988] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475" iface="eth0" netns="" Nov 8 00:26:37.493821 containerd[1736]: 2025-11-08 00:26:37.450 [INFO][5988] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475" Nov 8 00:26:37.493821 containerd[1736]: 2025-11-08 00:26:37.450 [INFO][5988] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475" Nov 8 00:26:37.493821 containerd[1736]: 2025-11-08 00:26:37.480 [INFO][5996] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475" HandleID="k8s-pod-network.5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475" Workload="ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--579774f8c5--5sn5r-eth0" Nov 8 00:26:37.493821 containerd[1736]: 2025-11-08 00:26:37.480 [INFO][5996] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:37.493821 containerd[1736]: 2025-11-08 00:26:37.480 [INFO][5996] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:37.493821 containerd[1736]: 2025-11-08 00:26:37.487 [WARNING][5996] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475" HandleID="k8s-pod-network.5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475" Workload="ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--579774f8c5--5sn5r-eth0" Nov 8 00:26:37.493821 containerd[1736]: 2025-11-08 00:26:37.487 [INFO][5996] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475" HandleID="k8s-pod-network.5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475" Workload="ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--579774f8c5--5sn5r-eth0" Nov 8 00:26:37.493821 containerd[1736]: 2025-11-08 00:26:37.489 [INFO][5996] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:37.493821 containerd[1736]: 2025-11-08 00:26:37.491 [INFO][5988] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475" Nov 8 00:26:37.495076 containerd[1736]: time="2025-11-08T00:26:37.493880832Z" level=info msg="TearDown network for sandbox \"5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475\" successfully" Nov 8 00:26:37.495076 containerd[1736]: time="2025-11-08T00:26:37.493913232Z" level=info msg="StopPodSandbox for \"5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475\" returns successfully" Nov 8 00:26:37.495076 containerd[1736]: time="2025-11-08T00:26:37.494507536Z" level=info msg="RemovePodSandbox for \"5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475\"" Nov 8 00:26:37.495076 containerd[1736]: time="2025-11-08T00:26:37.494535336Z" level=info msg="Forcibly stopping sandbox \"5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475\"" Nov 8 00:26:37.598747 containerd[1736]: 2025-11-08 00:26:37.549 [WARNING][6010] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--579774f8c5--5sn5r-eth0", GenerateName:"calico-apiserver-579774f8c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"4bf1c931-a10c-42c4-bece-79d61b489c62", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 25, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"579774f8c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-036966ce4d", ContainerID:"782a4d4ad3face84e5969448e1816741ac956f0b097355ebdcc7452c2c7f58ce", Pod:"calico-apiserver-579774f8c5-5sn5r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.80.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali77a93201f5c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:37.598747 containerd[1736]: 2025-11-08 00:26:37.549 [INFO][6010] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475" Nov 8 00:26:37.598747 containerd[1736]: 2025-11-08 00:26:37.550 [INFO][6010] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475" iface="eth0" netns="" Nov 8 00:26:37.598747 containerd[1736]: 2025-11-08 00:26:37.550 [INFO][6010] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475" Nov 8 00:26:37.598747 containerd[1736]: 2025-11-08 00:26:37.550 [INFO][6010] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475" Nov 8 00:26:37.598747 containerd[1736]: 2025-11-08 00:26:37.582 [INFO][6017] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475" HandleID="k8s-pod-network.5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475" Workload="ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--579774f8c5--5sn5r-eth0" Nov 8 00:26:37.598747 containerd[1736]: 2025-11-08 00:26:37.582 [INFO][6017] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:37.598747 containerd[1736]: 2025-11-08 00:26:37.582 [INFO][6017] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:37.598747 containerd[1736]: 2025-11-08 00:26:37.589 [WARNING][6017] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475" HandleID="k8s-pod-network.5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475" Workload="ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--579774f8c5--5sn5r-eth0" Nov 8 00:26:37.598747 containerd[1736]: 2025-11-08 00:26:37.589 [INFO][6017] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475" HandleID="k8s-pod-network.5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475" Workload="ci--4081.3.6--n--036966ce4d-k8s-calico--apiserver--579774f8c5--5sn5r-eth0" Nov 8 00:26:37.598747 containerd[1736]: 2025-11-08 00:26:37.590 [INFO][6017] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:37.598747 containerd[1736]: 2025-11-08 00:26:37.596 [INFO][6010] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475" Nov 8 00:26:37.598747 containerd[1736]: time="2025-11-08T00:26:37.598453145Z" level=info msg="TearDown network for sandbox \"5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475\" successfully" Nov 8 00:26:37.608894 containerd[1736]: time="2025-11-08T00:26:37.608808406Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:26:37.609742 containerd[1736]: time="2025-11-08T00:26:37.609096208Z" level=info msg="RemovePodSandbox \"5f511ccb2ba0ad6aa22786ea1ed25f0284ba1093b3f0967f23424d70a2430475\" returns successfully" Nov 8 00:26:37.609742 containerd[1736]: time="2025-11-08T00:26:37.609607911Z" level=info msg="StopPodSandbox for \"916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b\"" Nov 8 00:26:37.709655 containerd[1736]: 2025-11-08 00:26:37.657 [WARNING][6031] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--036966ce4d-k8s-goldmane--666569f655--9cc5c-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"5435b4b0-7a30-4d83-a845-ff6ed8ff1797", ResourceVersion:"1106", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 25, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-036966ce4d", ContainerID:"195822c297fdbd110a0050a2f800e1ead322d48fc6df3d05e833b837df4893b7", Pod:"goldmane-666569f655-9cc5c", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.80.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali58dde8405e5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:37.709655 containerd[1736]: 2025-11-08 00:26:37.658 [INFO][6031] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b" Nov 8 00:26:37.709655 containerd[1736]: 2025-11-08 00:26:37.658 [INFO][6031] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b" iface="eth0" netns="" Nov 8 00:26:37.709655 containerd[1736]: 2025-11-08 00:26:37.658 [INFO][6031] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b" Nov 8 00:26:37.709655 containerd[1736]: 2025-11-08 00:26:37.658 [INFO][6031] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b" Nov 8 00:26:37.709655 containerd[1736]: 2025-11-08 00:26:37.693 [INFO][6039] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b" HandleID="k8s-pod-network.916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b" Workload="ci--4081.3.6--n--036966ce4d-k8s-goldmane--666569f655--9cc5c-eth0" Nov 8 00:26:37.709655 containerd[1736]: 2025-11-08 00:26:37.693 [INFO][6039] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:37.709655 containerd[1736]: 2025-11-08 00:26:37.693 [INFO][6039] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:37.709655 containerd[1736]: 2025-11-08 00:26:37.704 [WARNING][6039] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b" HandleID="k8s-pod-network.916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b" Workload="ci--4081.3.6--n--036966ce4d-k8s-goldmane--666569f655--9cc5c-eth0" Nov 8 00:26:37.709655 containerd[1736]: 2025-11-08 00:26:37.704 [INFO][6039] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b" HandleID="k8s-pod-network.916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b" Workload="ci--4081.3.6--n--036966ce4d-k8s-goldmane--666569f655--9cc5c-eth0" Nov 8 00:26:37.709655 containerd[1736]: 2025-11-08 00:26:37.705 [INFO][6039] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:37.709655 containerd[1736]: 2025-11-08 00:26:37.707 [INFO][6031] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b" Nov 8 00:26:37.711788 containerd[1736]: time="2025-11-08T00:26:37.710001399Z" level=info msg="TearDown network for sandbox \"916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b\" successfully" Nov 8 00:26:37.711788 containerd[1736]: time="2025-11-08T00:26:37.710038499Z" level=info msg="StopPodSandbox for \"916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b\" returns successfully" Nov 8 00:26:37.711788 containerd[1736]: time="2025-11-08T00:26:37.710942405Z" level=info msg="RemovePodSandbox for \"916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b\"" Nov 8 00:26:37.711788 containerd[1736]: time="2025-11-08T00:26:37.710980405Z" level=info msg="Forcibly stopping sandbox \"916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b\"" Nov 8 00:26:37.723714 containerd[1736]: time="2025-11-08T00:26:37.723298477Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:26:37.813316 containerd[1736]: 2025-11-08 00:26:37.765 [WARNING][6053] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--036966ce4d-k8s-goldmane--666569f655--9cc5c-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"5435b4b0-7a30-4d83-a845-ff6ed8ff1797", ResourceVersion:"1106", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 25, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-036966ce4d", ContainerID:"195822c297fdbd110a0050a2f800e1ead322d48fc6df3d05e833b837df4893b7", Pod:"goldmane-666569f655-9cc5c", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.80.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali58dde8405e5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:37.813316 containerd[1736]: 2025-11-08 00:26:37.766 [INFO][6053] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b" Nov 8 00:26:37.813316 containerd[1736]: 2025-11-08 00:26:37.766 [INFO][6053] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b" iface="eth0" netns="" Nov 8 00:26:37.813316 containerd[1736]: 2025-11-08 00:26:37.766 [INFO][6053] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b" Nov 8 00:26:37.813316 containerd[1736]: 2025-11-08 00:26:37.766 [INFO][6053] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b" Nov 8 00:26:37.813316 containerd[1736]: 2025-11-08 00:26:37.797 [INFO][6060] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b" HandleID="k8s-pod-network.916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b" Workload="ci--4081.3.6--n--036966ce4d-k8s-goldmane--666569f655--9cc5c-eth0" Nov 8 00:26:37.813316 containerd[1736]: 2025-11-08 00:26:37.797 [INFO][6060] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:37.813316 containerd[1736]: 2025-11-08 00:26:37.797 [INFO][6060] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:37.813316 containerd[1736]: 2025-11-08 00:26:37.808 [WARNING][6060] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b" HandleID="k8s-pod-network.916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b" Workload="ci--4081.3.6--n--036966ce4d-k8s-goldmane--666569f655--9cc5c-eth0" Nov 8 00:26:37.813316 containerd[1736]: 2025-11-08 00:26:37.808 [INFO][6060] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b" HandleID="k8s-pod-network.916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b" Workload="ci--4081.3.6--n--036966ce4d-k8s-goldmane--666569f655--9cc5c-eth0" Nov 8 00:26:37.813316 containerd[1736]: 2025-11-08 00:26:37.809 [INFO][6060] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:37.813316 containerd[1736]: 2025-11-08 00:26:37.811 [INFO][6053] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b" Nov 8 00:26:37.814030 containerd[1736]: time="2025-11-08T00:26:37.813410105Z" level=info msg="TearDown network for sandbox \"916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b\" successfully" Nov 8 00:26:37.824224 containerd[1736]: time="2025-11-08T00:26:37.823261263Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:26:37.824224 containerd[1736]: time="2025-11-08T00:26:37.823337364Z" level=info msg="RemovePodSandbox \"916ef30e065d7b3c103c6cca127f02c5617d2293047c06e942d206b2d3d93d8b\" returns successfully" Nov 8 00:26:37.977304 containerd[1736]: time="2025-11-08T00:26:37.977152265Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:26:37.982096 containerd[1736]: time="2025-11-08T00:26:37.981600692Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:26:37.982096 containerd[1736]: time="2025-11-08T00:26:37.981685492Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:26:37.982386 kubelet[3265]: E1108 00:26:37.981992 3265 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:26:37.982386 kubelet[3265]: E1108 00:26:37.982058 3265 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:26:37.984164 kubelet[3265]: E1108 00:26:37.983838 3265 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-56rtw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wpv6d_calico-system(88161698-8450-46cd-aabf-3650fadd565e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:26:37.986741 containerd[1736]: time="2025-11-08T00:26:37.986687721Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:26:38.237842 containerd[1736]: time="2025-11-08T00:26:38.237640093Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:26:38.240847 containerd[1736]: time="2025-11-08T00:26:38.240781811Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:26:38.241203 containerd[1736]: time="2025-11-08T00:26:38.241008212Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:26:38.241777 kubelet[3265]: E1108 00:26:38.241502 3265 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:26:38.241777 kubelet[3265]: E1108 00:26:38.241567 3265 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:26:38.242268 kubelet[3265]: E1108 00:26:38.242117 3265 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-56rtw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wpv6d_calico-system(88161698-8450-46cd-aabf-3650fadd565e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:26:38.243621 kubelet[3265]: E1108 00:26:38.243553 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not 
found\"]" pod="calico-system/csi-node-driver-wpv6d" podUID="88161698-8450-46cd-aabf-3650fadd565e" Nov 8 00:26:39.721825 containerd[1736]: time="2025-11-08T00:26:39.721578093Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:26:39.965747 containerd[1736]: time="2025-11-08T00:26:39.965668324Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:26:39.968263 containerd[1736]: time="2025-11-08T00:26:39.968209339Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:26:39.968393 containerd[1736]: time="2025-11-08T00:26:39.968322740Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:26:39.968613 kubelet[3265]: E1108 00:26:39.968552 3265 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:26:39.969047 kubelet[3265]: E1108 00:26:39.968616 3265 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:26:39.970213 containerd[1736]: time="2025-11-08T00:26:39.969244745Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:26:39.971039 kubelet[3265]: E1108 00:26:39.970974 3265 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-72wkc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-847dc76596-gs868_calico-apiserver(f9dd6f03-0839-47a7-a4a1-75b8d5be8ef2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:26:39.972411 kubelet[3265]: E1108 00:26:39.972199 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-847dc76596-gs868" podUID="f9dd6f03-0839-47a7-a4a1-75b8d5be8ef2" Nov 8 00:26:40.221926 containerd[1736]: time="2025-11-08T00:26:40.221869126Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:26:40.224634 containerd[1736]: time="2025-11-08T00:26:40.224492741Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:26:40.226430 containerd[1736]: time="2025-11-08T00:26:40.226376252Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:26:40.226707 kubelet[3265]: E1108 00:26:40.226650 3265 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:26:40.226786 kubelet[3265]: E1108 00:26:40.226728 3265 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:26:40.226968 kubelet[3265]: E1108 00:26:40.226906 3265 kuberuntime_manager.go:1358] "Unhandled Error" 
err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9z8kg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5ccd97dd97-f9lsk_calico-system(415c1089-29e6-4262-b21f-188443e0b159): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:26:40.228614 kubelet[3265]: E1108 00:26:40.228576 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5ccd97dd97-f9lsk" podUID="415c1089-29e6-4262-b21f-188443e0b159" Nov 8 00:26:41.719741 containerd[1736]: time="2025-11-08T00:26:41.719676208Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:26:41.968733 containerd[1736]: time="2025-11-08T00:26:41.968336665Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:26:41.972272 containerd[1736]: time="2025-11-08T00:26:41.972080287Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:26:41.972272 containerd[1736]: time="2025-11-08T00:26:41.972109788Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:26:41.972850 kubelet[3265]: E1108 00:26:41.972464 3265 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:26:41.972850 kubelet[3265]: E1108 00:26:41.972530 3265 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:26:41.972850 kubelet[3265]: E1108 00:26:41.972711 3265 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9kgh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-579774f8c5-5sn5r_calico-apiserver(4bf1c931-a10c-42c4-bece-79d61b489c62): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:26:41.974585 kubelet[3265]: E1108 00:26:41.974519 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-579774f8c5-5sn5r" podUID="4bf1c931-a10c-42c4-bece-79d61b489c62" Nov 8 00:26:42.719383 containerd[1736]: time="2025-11-08T00:26:42.719264168Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:26:42.960879 containerd[1736]: time="2025-11-08T00:26:42.960820784Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:26:42.963884 containerd[1736]: time="2025-11-08T00:26:42.963825302Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:26:42.963985 containerd[1736]: time="2025-11-08T00:26:42.963941103Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:26:42.964347 kubelet[3265]: E1108 00:26:42.964218 3265 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:26:42.964347 kubelet[3265]: E1108 00:26:42.964291 3265 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:26:42.965372 kubelet[3265]: E1108 00:26:42.964839 3265 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lsj2l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-9cc5c_calico-system(5435b4b0-7a30-4d83-a845-ff6ed8ff1797): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:26:42.966045 kubelet[3265]: E1108 00:26:42.966009 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9cc5c" 
podUID="5435b4b0-7a30-4d83-a845-ff6ed8ff1797" Nov 8 00:26:42.966686 containerd[1736]: time="2025-11-08T00:26:42.966562618Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:26:43.216631 containerd[1736]: time="2025-11-08T00:26:43.216571884Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:26:43.226540 containerd[1736]: time="2025-11-08T00:26:43.226476642Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:26:43.226721 containerd[1736]: time="2025-11-08T00:26:43.226514342Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:26:43.226930 kubelet[3265]: E1108 00:26:43.226865 3265 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:26:43.227351 kubelet[3265]: E1108 00:26:43.226946 3265 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:26:43.227351 kubelet[3265]: E1108 00:26:43.227196 3265 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k5nb2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-847dc76596-p994f_calico-apiserver(c8f9b682-1403-4984-8b7e-efa798fabe9d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:26:43.229724 kubelet[3265]: E1108 00:26:43.228563 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-847dc76596-p994f" podUID="c8f9b682-1403-4984-8b7e-efa798fabe9d" Nov 8 00:26:50.718935 kubelet[3265]: E1108 00:26:50.718746 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-847dc76596-gs868" podUID="f9dd6f03-0839-47a7-a4a1-75b8d5be8ef2" Nov 8 00:26:50.722332 kubelet[3265]: E1108 00:26:50.722283 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7b877f7c4d-czsrv" podUID="eb4448af-cf19-4ce5-bbad-22d98ef7ab44" Nov 8 00:26:50.722498 kubelet[3265]: E1108 00:26:50.722462 3265 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wpv6d" podUID="88161698-8450-46cd-aabf-3650fadd565e" Nov 8 00:26:51.722649 kubelet[3265]: E1108 00:26:51.722546 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5ccd97dd97-f9lsk" podUID="415c1089-29e6-4262-b21f-188443e0b159" Nov 8 00:26:53.720675 kubelet[3265]: E1108 00:26:53.720584 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9cc5c" podUID="5435b4b0-7a30-4d83-a845-ff6ed8ff1797" Nov 8 00:26:57.722550 kubelet[3265]: E1108 00:26:57.722496 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-579774f8c5-5sn5r" podUID="4bf1c931-a10c-42c4-bece-79d61b489c62" Nov 8 00:26:58.719096 kubelet[3265]: E1108 00:26:58.718635 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-847dc76596-p994f" 
podUID="c8f9b682-1403-4984-8b7e-efa798fabe9d" Nov 8 00:27:02.722399 containerd[1736]: time="2025-11-08T00:27:02.720643708Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:27:02.974204 containerd[1736]: time="2025-11-08T00:27:02.974043087Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:27:02.977899 containerd[1736]: time="2025-11-08T00:27:02.977731008Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:27:02.977899 containerd[1736]: time="2025-11-08T00:27:02.977838309Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:27:02.978098 kubelet[3265]: E1108 00:27:02.978025 3265 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:27:02.978491 kubelet[3265]: E1108 00:27:02.978093 3265 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:27:02.978491 kubelet[3265]: E1108 00:27:02.978375 3265 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-72wkc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-847dc76596-gs868_calico-apiserver(f9dd6f03-0839-47a7-a4a1-75b8d5be8ef2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:27:02.979838 kubelet[3265]: E1108 00:27:02.979780 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-847dc76596-gs868" podUID="f9dd6f03-0839-47a7-a4a1-75b8d5be8ef2" Nov 8 00:27:02.980543 containerd[1736]: time="2025-11-08T00:27:02.980295523Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:27:03.221546 containerd[1736]: time="2025-11-08T00:27:03.221236729Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:27:03.224407 containerd[1736]: time="2025-11-08T00:27:03.224125646Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:27:03.226312 containerd[1736]: time="2025-11-08T00:27:03.224354248Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:27:03.226448 kubelet[3265]: E1108 00:27:03.225131 3265 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:27:03.226448 kubelet[3265]: E1108 00:27:03.225193 3265 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:27:03.226448 kubelet[3265]: E1108 00:27:03.225346 3265 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b52ac106012e441c897a14b0417ed820,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xtpmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7b877f7c4d-czsrv_calico-system(eb4448af-cf19-4ce5-bbad-22d98ef7ab44): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:27:03.228373 containerd[1736]: time="2025-11-08T00:27:03.228350671Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:27:03.471414 containerd[1736]: time="2025-11-08T00:27:03.471337989Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:27:03.475885 containerd[1736]: time="2025-11-08T00:27:03.475763915Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:27:03.477557 containerd[1736]: time="2025-11-08T00:27:03.475887516Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:27:03.477628 kubelet[3265]: E1108 00:27:03.476056 3265 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:27:03.477628 kubelet[3265]: E1108 00:27:03.476105 3265 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:27:03.477628 kubelet[3265]: E1108 00:27:03.476256 3265 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xtpmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7b877f7c4d-czsrv_calico-system(eb4448af-cf19-4ce5-bbad-22d98ef7ab44): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:27:03.477960 kubelet[3265]: E1108 00:27:03.477922 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7b877f7c4d-czsrv" podUID="eb4448af-cf19-4ce5-bbad-22d98ef7ab44" Nov 8 00:27:04.720290 containerd[1736]: time="2025-11-08T00:27:04.720238078Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:27:05.008282 containerd[1736]: 
time="2025-11-08T00:27:05.008132059Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:27:05.011026 containerd[1736]: time="2025-11-08T00:27:05.010968775Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:27:05.011299 containerd[1736]: time="2025-11-08T00:27:05.010993275Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:27:05.011813 kubelet[3265]: E1108 00:27:05.011468 3265 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:27:05.011813 kubelet[3265]: E1108 00:27:05.011528 3265 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:27:05.013001 containerd[1736]: time="2025-11-08T00:27:05.011901581Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:27:05.013173 kubelet[3265]: E1108 00:27:05.013076 3265 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lsj2l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-9cc5c_calico-system(5435b4b0-7a30-4d83-a845-ff6ed8ff1797): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:27:05.014368 kubelet[3265]: E1108 00:27:05.014334 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9cc5c" podUID="5435b4b0-7a30-4d83-a845-ff6ed8ff1797" Nov 8 00:27:05.272686 containerd[1736]: time="2025-11-08T00:27:05.272538802Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:27:05.277896 containerd[1736]: time="2025-11-08T00:27:05.277843233Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:27:05.278020 containerd[1736]: time="2025-11-08T00:27:05.277935033Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:27:05.278186 kubelet[3265]: E1108 00:27:05.278144 3265 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:27:05.278264 kubelet[3265]: E1108 00:27:05.278198 3265 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:27:05.278407 kubelet[3265]: E1108 00:27:05.278364 3265 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-56rtw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wpv6d_calico-system(88161698-8450-46cd-aabf-3650fadd565e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:27:05.281710 containerd[1736]: time="2025-11-08T00:27:05.281189952Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:27:05.536150 containerd[1736]: time="2025-11-08T00:27:05.535989040Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:27:05.539447 containerd[1736]: time="2025-11-08T00:27:05.538809956Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:27:05.539447 containerd[1736]: time="2025-11-08T00:27:05.538867356Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:27:05.539667 kubelet[3265]: E1108 00:27:05.539122 3265 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:27:05.539667 kubelet[3265]: E1108 00:27:05.539179 3265 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:27:05.539667 kubelet[3265]: E1108 00:27:05.539356 3265 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-56rtw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wpv6d_calico-system(88161698-8450-46cd-aabf-3650fadd565e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:27:05.540920 kubelet[3265]: E1108 00:27:05.540872 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wpv6d" podUID="88161698-8450-46cd-aabf-3650fadd565e" Nov 8 00:27:05.722047 containerd[1736]: time="2025-11-08T00:27:05.721492522Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:27:05.964043 containerd[1736]: time="2025-11-08T00:27:05.963870137Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:27:05.967929 containerd[1736]: time="2025-11-08T00:27:05.967550758Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:27:05.967929 containerd[1736]: time="2025-11-08T00:27:05.967668259Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:27:05.968628 kubelet[3265]: E1108 00:27:05.968299 3265 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:27:05.968628 kubelet[3265]: E1108 00:27:05.968404 3265 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:27:05.969017 kubelet[3265]: E1108 00:27:05.968823 3265 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9z8kg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5ccd97dd97-f9lsk_calico-system(415c1089-29e6-4262-b21f-188443e0b159): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:27:05.970361 kubelet[3265]: E1108 00:27:05.970299 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5ccd97dd97-f9lsk" podUID="415c1089-29e6-4262-b21f-188443e0b159" Nov 8 00:27:10.720437 containerd[1736]: time="2025-11-08T00:27:10.720360369Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:27:10.982038 containerd[1736]: time="2025-11-08T00:27:10.981879080Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:27:10.986727 containerd[1736]: time="2025-11-08T00:27:10.984481895Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:27:10.986727 containerd[1736]: time="2025-11-08T00:27:10.984586596Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:27:10.986727 containerd[1736]: time="2025-11-08T00:27:10.985798303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:27:10.986969 kubelet[3265]: E1108 00:27:10.984820 3265 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:27:10.986969 kubelet[3265]: E1108 00:27:10.984879 3265 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:27:10.986969 kubelet[3265]: E1108 00:27:10.985123 3265 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k5nb2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-847dc76596-p994f_calico-apiserver(c8f9b682-1403-4984-8b7e-efa798fabe9d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:27:10.987551 kubelet[3265]: E1108 00:27:10.987362 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-847dc76596-p994f" podUID="c8f9b682-1403-4984-8b7e-efa798fabe9d" Nov 8 00:27:11.246667 containerd[1736]: time="2025-11-08T00:27:11.245658005Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:27:11.248976 containerd[1736]: time="2025-11-08T00:27:11.248908523Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:27:11.249307 containerd[1736]: time="2025-11-08T00:27:11.248957224Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:27:11.249719 kubelet[3265]: E1108 00:27:11.249579 3265 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:27:11.250003 kubelet[3265]: E1108 00:27:11.249677 3265 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:27:11.250247 kubelet[3265]: E1108 00:27:11.250156 3265 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9kgh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-579774f8c5-5sn5r_calico-apiserver(4bf1c931-a10c-42c4-bece-79d61b489c62): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:27:11.251819 kubelet[3265]: E1108 00:27:11.251777 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-579774f8c5-5sn5r" podUID="4bf1c931-a10c-42c4-bece-79d61b489c62" Nov 8 00:27:15.724999 kubelet[3265]: E1108 00:27:15.722829 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-847dc76596-gs868" 
podUID="f9dd6f03-0839-47a7-a4a1-75b8d5be8ef2" Nov 8 00:27:16.721721 kubelet[3265]: E1108 00:27:16.720540 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7b877f7c4d-czsrv" podUID="eb4448af-cf19-4ce5-bbad-22d98ef7ab44" Nov 8 00:27:18.720965 kubelet[3265]: E1108 00:27:18.720918 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5ccd97dd97-f9lsk" podUID="415c1089-29e6-4262-b21f-188443e0b159" Nov 8 00:27:18.721508 kubelet[3265]: E1108 00:27:18.721054 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wpv6d" podUID="88161698-8450-46cd-aabf-3650fadd565e" Nov 8 00:27:19.723731 kubelet[3265]: E1108 00:27:19.720536 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9cc5c" podUID="5435b4b0-7a30-4d83-a845-ff6ed8ff1797" Nov 8 00:27:22.558170 systemd[1]: Started sshd@7-10.200.8.41:22-10.200.16.10:49404.service - OpenSSH 
per-connection server daemon (10.200.16.10:49404). Nov 8 00:27:22.721473 kubelet[3265]: E1108 00:27:22.721420 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-579774f8c5-5sn5r" podUID="4bf1c931-a10c-42c4-bece-79d61b489c62" Nov 8 00:27:23.188108 sshd[6117]: Accepted publickey for core from 10.200.16.10 port 49404 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY Nov 8 00:27:23.189888 sshd[6117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:27:23.194998 systemd-logind[1708]: New session 10 of user core. Nov 8 00:27:23.202874 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 8 00:27:23.775961 sshd[6117]: pam_unix(sshd:session): session closed for user core Nov 8 00:27:23.781069 systemd[1]: sshd@7-10.200.8.41:22-10.200.16.10:49404.service: Deactivated successfully. Nov 8 00:27:23.783373 systemd[1]: session-10.scope: Deactivated successfully. Nov 8 00:27:23.784829 systemd-logind[1708]: Session 10 logged out. Waiting for processes to exit. Nov 8 00:27:23.788124 systemd-logind[1708]: Removed session 10. Nov 8 00:27:25.723986 kubelet[3265]: E1108 00:27:25.723933 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-847dc76596-p994f" podUID="c8f9b682-1403-4984-8b7e-efa798fabe9d" Nov 8 00:27:28.721174 kubelet[3265]: E1108 00:27:28.721114 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7b877f7c4d-czsrv" podUID="eb4448af-cf19-4ce5-bbad-22d98ef7ab44" Nov 8 00:27:28.721796 kubelet[3265]: E1108 00:27:28.721067 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-847dc76596-gs868" podUID="f9dd6f03-0839-47a7-a4a1-75b8d5be8ef2" Nov 8 00:27:28.897248 systemd[1]: Started sshd@8-10.200.8.41:22-10.200.16.10:49412.service - OpenSSH per-connection server daemon (10.200.16.10:49412). Nov 8 00:27:29.526306 sshd[6157]: Accepted publickey for core from 10.200.16.10 port 49412 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY Nov 8 00:27:29.527833 sshd[6157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:27:29.532585 systemd-logind[1708]: New session 11 of user core. Nov 8 00:27:29.538870 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 8 00:27:30.136984 sshd[6157]: pam_unix(sshd:session): session closed for user core Nov 8 00:27:30.141405 systemd-logind[1708]: Session 11 logged out. Waiting for processes to exit. Nov 8 00:27:30.143476 systemd[1]: sshd@8-10.200.8.41:22-10.200.16.10:49412.service: Deactivated successfully. Nov 8 00:27:30.148193 systemd[1]: session-11.scope: Deactivated successfully. Nov 8 00:27:30.149793 systemd-logind[1708]: Removed session 11. Nov 8 00:27:31.724688 kubelet[3265]: E1108 00:27:31.724545 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wpv6d" podUID="88161698-8450-46cd-aabf-3650fadd565e" Nov 8 00:27:33.721688 kubelet[3265]: E1108 00:27:33.721524 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5ccd97dd97-f9lsk" podUID="415c1089-29e6-4262-b21f-188443e0b159" Nov 8 00:27:34.719088 kubelet[3265]: E1108 00:27:34.719034 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9cc5c" podUID="5435b4b0-7a30-4d83-a845-ff6ed8ff1797" Nov 8 00:27:34.719305 kubelet[3265]: E1108 00:27:34.719110 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-579774f8c5-5sn5r" podUID="4bf1c931-a10c-42c4-bece-79d61b489c62" Nov 8 00:27:35.254866 systemd[1]: Started sshd@9-10.200.8.41:22-10.200.16.10:58470.service - OpenSSH per-connection server daemon (10.200.16.10:58470). Nov 8 00:27:35.888943 sshd[6172]: Accepted publickey for core from 10.200.16.10 port 58470 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY Nov 8 00:27:35.891318 sshd[6172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:27:35.900605 systemd-logind[1708]: New session 12 of user core. Nov 8 00:27:35.907250 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 8 00:27:36.462981 sshd[6172]: pam_unix(sshd:session): session closed for user core Nov 8 00:27:36.470155 systemd[1]: sshd@9-10.200.8.41:22-10.200.16.10:58470.service: Deactivated successfully. Nov 8 00:27:36.470462 systemd-logind[1708]: Session 12 logged out. Waiting for processes to exit. Nov 8 00:27:36.473516 systemd[1]: session-12.scope: Deactivated successfully. Nov 8 00:27:36.475598 systemd-logind[1708]: Removed session 12. Nov 8 00:27:36.585226 systemd[1]: Started sshd@10-10.200.8.41:22-10.200.16.10:58484.service - OpenSSH per-connection server daemon (10.200.16.10:58484). Nov 8 00:27:37.228856 sshd[6188]: Accepted publickey for core from 10.200.16.10 port 58484 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY Nov 8 00:27:37.232367 sshd[6188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:27:37.238874 systemd-logind[1708]: New session 13 of user core. Nov 8 00:27:37.244176 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 8 00:27:37.825947 sshd[6188]: pam_unix(sshd:session): session closed for user core Nov 8 00:27:37.830985 systemd-logind[1708]: Session 13 logged out. Waiting for processes to exit. Nov 8 00:27:37.832156 systemd[1]: sshd@10-10.200.8.41:22-10.200.16.10:58484.service: Deactivated successfully. Nov 8 00:27:37.837478 systemd[1]: session-13.scope: Deactivated successfully. Nov 8 00:27:37.845173 systemd-logind[1708]: Removed session 13. Nov 8 00:27:37.945128 systemd[1]: Started sshd@11-10.200.8.41:22-10.200.16.10:58490.service - OpenSSH per-connection server daemon (10.200.16.10:58490). Nov 8 00:27:38.577726 sshd[6199]: Accepted publickey for core from 10.200.16.10 port 58490 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY Nov 8 00:27:38.579616 sshd[6199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:27:38.588325 systemd-logind[1708]: New session 14 of user core. Nov 8 00:27:38.596889 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 8 00:27:39.109621 sshd[6199]: pam_unix(sshd:session): session closed for user core Nov 8 00:27:39.113415 systemd-logind[1708]: Session 14 logged out. 
Waiting for processes to exit. Nov 8 00:27:39.114026 systemd[1]: sshd@11-10.200.8.41:22-10.200.16.10:58490.service: Deactivated successfully. Nov 8 00:27:39.117984 systemd[1]: session-14.scope: Deactivated successfully. Nov 8 00:27:39.119609 systemd-logind[1708]: Removed session 14. Nov 8 00:27:39.721424 kubelet[3265]: E1108 00:27:39.721369 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-847dc76596-p994f" podUID="c8f9b682-1403-4984-8b7e-efa798fabe9d" Nov 8 00:27:41.723263 kubelet[3265]: E1108 00:27:41.723212 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-847dc76596-gs868" podUID="f9dd6f03-0839-47a7-a4a1-75b8d5be8ef2" Nov 8 00:27:42.721066 kubelet[3265]: E1108 00:27:42.720906 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7b877f7c4d-czsrv" podUID="eb4448af-cf19-4ce5-bbad-22d98ef7ab44" Nov 8 00:27:44.226283 systemd[1]: Started sshd@12-10.200.8.41:22-10.200.16.10:46352.service - OpenSSH per-connection server daemon (10.200.16.10:46352). Nov 8 00:27:44.870728 sshd[6224]: Accepted publickey for core from 10.200.16.10 port 46352 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY Nov 8 00:27:44.871436 sshd[6224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:27:44.878474 systemd-logind[1708]: New session 15 of user core. Nov 8 00:27:44.883979 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 8 00:27:45.431783 sshd[6224]: pam_unix(sshd:session): session closed for user core Nov 8 00:27:45.435664 systemd[1]: sshd@12-10.200.8.41:22-10.200.16.10:46352.service: Deactivated successfully. Nov 8 00:27:45.438367 systemd[1]: session-15.scope: Deactivated successfully. Nov 8 00:27:45.439214 systemd-logind[1708]: Session 15 logged out. 
Waiting for processes to exit. Nov 8 00:27:45.440240 systemd-logind[1708]: Removed session 15. Nov 8 00:27:45.723089 kubelet[3265]: E1108 00:27:45.721806 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5ccd97dd97-f9lsk" podUID="415c1089-29e6-4262-b21f-188443e0b159" Nov 8 00:27:46.719041 containerd[1736]: time="2025-11-08T00:27:46.718886632Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:27:46.971078 containerd[1736]: time="2025-11-08T00:27:46.970533406Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:27:46.976323 containerd[1736]: time="2025-11-08T00:27:46.976104843Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:27:46.976323 containerd[1736]: time="2025-11-08T00:27:46.976234143Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:27:46.976501 kubelet[3265]: E1108 00:27:46.976417 3265 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:27:46.976501 kubelet[3265]: E1108 00:27:46.976483 3265 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:27:46.976987 kubelet[3265]: E1108 00:27:46.976643 3265 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-56rtw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wpv6d_calico-system(88161698-8450-46cd-aabf-3650fadd565e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:27:46.980403 containerd[1736]: time="2025-11-08T00:27:46.980371571Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:27:47.219831 containerd[1736]: time="2025-11-08T00:27:47.219768263Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:27:47.223659 containerd[1736]: time="2025-11-08T00:27:47.223518488Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:27:47.223783 containerd[1736]: time="2025-11-08T00:27:47.223638489Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:27:47.224408 kubelet[3265]: E1108 00:27:47.223971 3265 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:27:47.224408 kubelet[3265]: E1108 00:27:47.224035 3265 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:27:47.224408 kubelet[3265]: E1108 00:27:47.224193 3265 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-56rtw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wpv6d_calico-system(88161698-8450-46cd-aabf-3650fadd565e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:27:47.225690 kubelet[3265]: E1108 00:27:47.225637 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not 
found\"]" pod="calico-system/csi-node-driver-wpv6d" podUID="88161698-8450-46cd-aabf-3650fadd565e" Nov 8 00:27:47.721120 containerd[1736]: time="2025-11-08T00:27:47.721069196Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:27:48.071876 containerd[1736]: time="2025-11-08T00:27:48.071666728Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:27:48.074905 containerd[1736]: time="2025-11-08T00:27:48.074678748Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:27:48.074905 containerd[1736]: time="2025-11-08T00:27:48.074802248Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:27:48.075169 kubelet[3265]: E1108 00:27:48.075062 3265 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:27:48.075570 kubelet[3265]: E1108 00:27:48.075210 3265 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:27:48.076728 kubelet[3265]: E1108 00:27:48.076490 3265 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lsj2l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-9cc5c_calico-system(5435b4b0-7a30-4d83-a845-ff6ed8ff1797): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:27:48.078192 kubelet[3265]: E1108 00:27:48.078144 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9cc5c" podUID="5435b4b0-7a30-4d83-a845-ff6ed8ff1797" Nov 8 00:27:48.719593 kubelet[3265]: E1108 00:27:48.719279 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-579774f8c5-5sn5r" podUID="4bf1c931-a10c-42c4-bece-79d61b489c62" Nov 8 00:27:50.554743 systemd[1]: Started sshd@13-10.200.8.41:22-10.200.16.10:59196.service - OpenSSH per-connection server daemon (10.200.16.10:59196). Nov 8 00:27:51.184275 sshd[6239]: Accepted publickey for core from 10.200.16.10 port 59196 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY Nov 8 00:27:51.187008 sshd[6239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:27:51.193431 systemd-logind[1708]: New session 16 of user core. Nov 8 00:27:51.200118 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 8 00:27:51.695146 sshd[6239]: pam_unix(sshd:session): session closed for user core Nov 8 00:27:51.698504 systemd-logind[1708]: Session 16 logged out. Waiting for processes to exit. Nov 8 00:27:51.698897 systemd[1]: sshd@13-10.200.8.41:22-10.200.16.10:59196.service: Deactivated successfully. 
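
The kubelet entries above alternate between ErrImagePull (a pull attempt that just failed) and ImagePullBackOff (a scheduled retry being skipped while the backoff window is open). A minimal Go sketch of that retry pattern follows; the 10-second initial delay doubling to a 5-minute cap are the commonly cited kubelet defaults and are assumptions here, not values taken from this log.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// pullImage is a stand-in for the real CRI PullImage call; in this log every
// attempt fails with a registry "not found" error, so the stub does the same.
func pullImage(ref string) error {
	return errors.New("failed to resolve reference " + ref + ": not found")
}

func main() {
	ref := "ghcr.io/flatcar/calico/apiserver:v3.30.4"

	// Assumed kubelet-style backoff: start at 10s, double on every failure,
	// cap at 5 minutes. While the window is open the pod worker reports
	// ImagePullBackOff instead of retrying, which matches the cadence above.
	delay, maxDelay := 10*time.Second, 5*time.Minute

	for attempt := 1; attempt <= 5; attempt++ {
		if err := pullImage(ref); err != nil {
			fmt.Printf("attempt %d: ErrImagePull: %v\n", attempt, err)
			fmt.Printf("attempt %d: backing off %s (ImagePullBackOff)\n", attempt, delay)
			time.Sleep(delay) // in the kubelet this is a timer, not a blocking sleep
			if delay *= 2; delay > maxDelay {
				delay = maxDelay
			}
			continue
		}
		fmt.Println("pulled", ref)
		return
	}
}
```
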
Nov 8 00:27:51.701396 systemd[1]: session-16.scope: Deactivated successfully. Nov 8 00:27:51.703363 systemd-logind[1708]: Removed session 16. Nov 8 00:27:52.720960 containerd[1736]: time="2025-11-08T00:27:52.720909021Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:27:52.987481 containerd[1736]: time="2025-11-08T00:27:52.987327384Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:27:52.990177 containerd[1736]: time="2025-11-08T00:27:52.990078700Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:27:52.990177 containerd[1736]: time="2025-11-08T00:27:52.990127400Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:27:52.990503 kubelet[3265]: E1108 00:27:52.990455 3265 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:27:52.991119 kubelet[3265]: E1108 00:27:52.990520 3265 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:27:52.991119 kubelet[3265]: E1108 00:27:52.990706 3265 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-72wkc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-847dc76596-gs868_calico-apiserver(f9dd6f03-0839-47a7-a4a1-75b8d5be8ef2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:27:52.992030 kubelet[3265]: E1108 00:27:52.991819 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-847dc76596-gs868" podUID="f9dd6f03-0839-47a7-a4a1-75b8d5be8ef2" Nov 8 00:27:53.720773 containerd[1736]: time="2025-11-08T00:27:53.720478584Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:27:53.974814 containerd[1736]: time="2025-11-08T00:27:53.973785769Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:27:53.977296 containerd[1736]: time="2025-11-08T00:27:53.977233089Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:27:53.977422 containerd[1736]: time="2025-11-08T00:27:53.977351090Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:27:53.977631 kubelet[3265]: E1108 00:27:53.977581 3265 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:27:53.977780 kubelet[3265]: E1108 00:27:53.977650 3265 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:27:53.979269 kubelet[3265]: E1108 
00:27:53.979107 3265 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k5nb2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-847dc76596-p994f_calico-apiserver(c8f9b682-1403-4984-8b7e-efa798fabe9d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:27:53.980513 kubelet[3265]: E1108 00:27:53.980471 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-847dc76596-p994f" podUID="c8f9b682-1403-4984-8b7e-efa798fabe9d" Nov 8 00:27:54.719542 containerd[1736]: time="2025-11-08T00:27:54.719488543Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:27:54.980413 containerd[1736]: time="2025-11-08T00:27:54.980061171Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:27:54.983593 containerd[1736]: time="2025-11-08T00:27:54.983402090Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:27:54.983593 containerd[1736]: time="2025-11-08T00:27:54.983512191Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:27:54.986380 kubelet[3265]: E1108 00:27:54.984048 3265 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:27:54.986380 kubelet[3265]: E1108 00:27:54.984109 3265 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:27:54.986380 kubelet[3265]: E1108 00:27:54.984293 3265 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b52ac106012e441c897a14b0417ed820,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xtpmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7b877f7c4d-czsrv_calico-system(eb4448af-cf19-4ce5-bbad-22d98ef7ab44): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:27:54.988683 containerd[1736]: time="2025-11-08T00:27:54.988383720Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:27:55.231290 containerd[1736]: time="2025-11-08T00:27:55.231052243Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:27:55.234113 containerd[1736]: time="2025-11-08T00:27:55.233887359Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:27:55.234113 containerd[1736]: time="2025-11-08T00:27:55.234004160Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:27:55.234933 kubelet[3265]: E1108 00:27:55.234344 3265 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:27:55.234933 kubelet[3265]: E1108 00:27:55.234542 3265 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:27:55.234933 kubelet[3265]: E1108 00:27:55.234837 3265 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xtpmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7b877f7c4d-czsrv_calico-system(eb4448af-cf19-4ce5-bbad-22d98ef7ab44): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:27:55.236390 kubelet[3265]: E1108 00:27:55.236341 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7b877f7c4d-czsrv" podUID="eb4448af-cf19-4ce5-bbad-22d98ef7ab44" Nov 8 00:27:56.719555 containerd[1736]: time="2025-11-08T00:27:56.719511980Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:27:56.811040 systemd[1]: Started sshd@14-10.200.8.41:22-10.200.16.10:59200.service - OpenSSH per-connection server daemon (10.200.16.10:59200). Nov 8 00:27:56.975823 containerd[1736]: time="2025-11-08T00:27:56.975667902Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:27:56.980523 containerd[1736]: time="2025-11-08T00:27:56.980402544Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:27:56.980523 containerd[1736]: time="2025-11-08T00:27:56.980446444Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:27:56.980708 kubelet[3265]: E1108 00:27:56.980655 3265 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:27:56.981088 kubelet[3265]: E1108 00:27:56.980726 3265 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:27:56.981088 kubelet[3265]: E1108 00:27:56.980905 3265 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9z8kg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5ccd97dd97-f9lsk_calico-system(415c1089-29e6-4262-b21f-188443e0b159): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:27:56.982523 kubelet[3265]: E1108 00:27:56.982480 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5ccd97dd97-f9lsk" podUID="415c1089-29e6-4262-b21f-188443e0b159" Nov 8 00:27:57.454564 sshd[6267]: Accepted publickey for core from 10.200.16.10 port 59200 ssh2: RSA 
SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY Nov 8 00:27:57.457993 sshd[6267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:27:57.467209 systemd-logind[1708]: New session 17 of user core. Nov 8 00:27:57.473900 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 8 00:27:58.013966 sshd[6267]: pam_unix(sshd:session): session closed for user core Nov 8 00:27:58.018526 systemd-logind[1708]: Session 17 logged out. Waiting for processes to exit. Nov 8 00:27:58.019588 systemd[1]: sshd@14-10.200.8.41:22-10.200.16.10:59200.service: Deactivated successfully. Nov 8 00:27:58.023452 systemd[1]: session-17.scope: Deactivated successfully. Nov 8 00:27:58.027642 systemd-logind[1708]: Removed session 17. Nov 8 00:27:58.719447 kubelet[3265]: E1108 00:27:58.719362 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9cc5c" podUID="5435b4b0-7a30-4d83-a845-ff6ed8ff1797" Nov 8 00:27:59.725679 kubelet[3265]: E1108 00:27:59.725460 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wpv6d" podUID="88161698-8450-46cd-aabf-3650fadd565e" Nov 8 00:28:03.133075 systemd[1]: Started sshd@15-10.200.8.41:22-10.200.16.10:32860.service - OpenSSH per-connection server daemon (10.200.16.10:32860). Nov 8 00:28:03.721779 containerd[1736]: time="2025-11-08T00:28:03.721729625Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:28:03.762814 sshd[6311]: Accepted publickey for core from 10.200.16.10 port 32860 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY Nov 8 00:28:03.763841 sshd[6311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:28:03.770059 systemd-logind[1708]: New session 18 of user core. Nov 8 00:28:03.775893 systemd[1]: Started session-18.scope - Session 18 of User core. 
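
The "Unhandled Error" entries above are kubelet's Go-struct rendering of the container specs it could not start. Read back into client-go types, the calico-apiserver container from those dumps looks roughly like the sketch below; the field values are copied from the log, but this is only an illustration of how to read the dump (the auto-injected serviceaccount mount is omitted), not the operator's actual manifest.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// boolPtr/int64Ptr mirror the *true / *10001 pointer notation in the dump.
func boolPtr(b bool) *bool    { return &b }
func int64Ptr(i int64) *int64 { return &i }

func main() {
	// Reconstruction of the calico-apiserver container printed in the
	// kuberuntime_manager.go "Unhandled Error" entries above.
	c := corev1.Container{
		Name:  "calico-apiserver",
		Image: "ghcr.io/flatcar/calico/apiserver:v3.30.4",
		Args: []string{
			"--secure-port=5443",
			"--tls-private-key-file=/calico-apiserver-certs/tls.key",
			"--tls-cert-file=/calico-apiserver-certs/tls.crt",
		},
		Env: []corev1.EnvVar{
			{Name: "DATASTORE_TYPE", Value: "kubernetes"},
			{Name: "KUBERNETES_SERVICE_HOST", Value: "10.96.0.1"},
			{Name: "KUBERNETES_SERVICE_PORT", Value: "443"},
			{Name: "LOG_LEVEL", Value: "info"},
			{Name: "MULTI_INTERFACE_MODE", Value: "none"},
		},
		VolumeMounts: []corev1.VolumeMount{
			{Name: "calico-apiserver-certs", ReadOnly: true, MountPath: "/calico-apiserver-certs"},
		},
		ReadinessProbe: &corev1.Probe{
			ProbeHandler: corev1.ProbeHandler{
				HTTPGet: &corev1.HTTPGetAction{
					Path:   "/readyz",
					Port:   intstr.FromInt(5443),
					Scheme: corev1.URISchemeHTTPS,
				},
			},
			TimeoutSeconds: 5,
			PeriodSeconds:  60,
		},
		ImagePullPolicy: corev1.PullIfNotPresent,
		SecurityContext: &corev1.SecurityContext{
			Capabilities:             &corev1.Capabilities{Drop: []corev1.Capability{"ALL"}},
			Privileged:               boolPtr(false),
			RunAsUser:                int64Ptr(10001),
			RunAsGroup:               int64Ptr(10001),
			RunAsNonRoot:             boolPtr(true),
			AllowPrivilegeEscalation: boolPtr(false),
			SeccompProfile: &corev1.SeccompProfile{
				Type: corev1.SeccompProfileTypeRuntimeDefault,
			},
		},
	}
	fmt.Printf("%s wants image %s\n", c.Name, c.Image)
}
```
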
Nov 8 00:28:03.968244 containerd[1736]: time="2025-11-08T00:28:03.968159363Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:28:03.973079 containerd[1736]: time="2025-11-08T00:28:03.972749403Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:28:03.973079 containerd[1736]: time="2025-11-08T00:28:03.972798903Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:28:03.974509 kubelet[3265]: E1108 00:28:03.973363 3265 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:28:03.974509 kubelet[3265]: E1108 00:28:03.973419 3265 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:28:03.974509 kubelet[3265]: E1108 00:28:03.973574 3265 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9kgh5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-579774f8c5-5sn5r_calico-apiserver(4bf1c931-a10c-42c4-bece-79d61b489c62): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:28:03.975300 kubelet[3265]: E1108 00:28:03.975125 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-579774f8c5-5sn5r" podUID="4bf1c931-a10c-42c4-bece-79d61b489c62" Nov 8 00:28:04.314989 sshd[6311]: pam_unix(sshd:session): session closed for user core Nov 8 00:28:04.320657 systemd[1]: sshd@15-10.200.8.41:22-10.200.16.10:32860.service: Deactivated successfully. Nov 8 00:28:04.324588 systemd[1]: session-18.scope: Deactivated successfully. Nov 8 00:28:04.327142 systemd-logind[1708]: Session 18 logged out. Waiting for processes to exit. Nov 8 00:28:04.329268 systemd-logind[1708]: Removed session 18. Nov 8 00:28:04.431801 systemd[1]: Started sshd@16-10.200.8.41:22-10.200.16.10:32870.service - OpenSSH per-connection server daemon (10.200.16.10:32870). Nov 8 00:28:05.056488 sshd[6324]: Accepted publickey for core from 10.200.16.10 port 32870 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY Nov 8 00:28:05.060057 sshd[6324]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:28:05.068792 systemd-logind[1708]: New session 19 of user core. Nov 8 00:28:05.076874 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 8 00:28:05.718180 sshd[6324]: pam_unix(sshd:session): session closed for user core Nov 8 00:28:05.723632 systemd-logind[1708]: Session 19 logged out. Waiting for processes to exit. Nov 8 00:28:05.725323 systemd[1]: sshd@16-10.200.8.41:22-10.200.16.10:32870.service: Deactivated successfully. Nov 8 00:28:05.729329 systemd[1]: session-19.scope: Deactivated successfully. Nov 8 00:28:05.731050 systemd-logind[1708]: Removed session 19. Nov 8 00:28:05.841296 systemd[1]: Started sshd@17-10.200.8.41:22-10.200.16.10:32878.service - OpenSSH per-connection server daemon (10.200.16.10:32878). 
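
The containerd[1736] entries show the same failure from the runtime side: PullImage, then "trying next host - response was http.StatusNotFound", then "failed to resolve reference". A minimal sketch that reproduces such a pull against the node's containerd socket with the containerd Go client is below; the socket path, the "k8s.io" namespace, and the v1 module path github.com/containerd/containerd are the usual kubelet-era defaults and are assumed here (newer containerd releases move the client under a /v2 path).

```go
package main

import (
	"context"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Kubelet-managed images normally live in the "k8s.io" namespace on
	// /run/containerd/containerd.sock; both values are assumptions here.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// For the tags in this log the resolver gets an HTTP 404 from ghcr.io,
	// so Pull returns the same "failed to resolve reference ... not found"
	// error that kubelet then relays as ErrImagePull.
	ref := "ghcr.io/flatcar/calico/apiserver:v3.30.4"
	if _, err := client.Pull(ctx, ref, containerd.WithPullUnpack); err != nil {
		log.Printf("pull %s failed: %v", ref, err)
		return
	}
	log.Printf("pulled %s", ref)
}
```
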
Nov 8 00:28:06.483298 sshd[6335]: Accepted publickey for core from 10.200.16.10 port 32878 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY
Nov 8 00:28:06.484410 sshd[6335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:28:06.494006 systemd-logind[1708]: New session 20 of user core.
Nov 8 00:28:06.500612 systemd[1]: Started session-20.scope - Session 20 of User core.
Nov 8 00:28:06.721784 kubelet[3265]: E1108 00:28:06.719178 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-847dc76596-gs868" podUID="f9dd6f03-0839-47a7-a4a1-75b8d5be8ef2"
Nov 8 00:28:07.720414 kubelet[3265]: E1108 00:28:07.720301 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-847dc76596-p994f" podUID="c8f9b682-1403-4984-8b7e-efa798fabe9d"
Nov 8 00:28:07.776127 sshd[6335]: pam_unix(sshd:session): session closed for user core
Nov 8 00:28:07.781678 systemd-logind[1708]: Session 20 logged out. Waiting for processes to exit.
Nov 8 00:28:07.782954 systemd[1]: sshd@17-10.200.8.41:22-10.200.16.10:32878.service: Deactivated successfully.
Nov 8 00:28:07.786685 systemd[1]: session-20.scope: Deactivated successfully.
Nov 8 00:28:07.788616 systemd-logind[1708]: Removed session 20.
Nov 8 00:28:07.885486 systemd[1]: Started sshd@18-10.200.8.41:22-10.200.16.10:32884.service - OpenSSH per-connection server daemon (10.200.16.10:32884).
Nov 8 00:28:08.519430 sshd[6354]: Accepted publickey for core from 10.200.16.10 port 32884 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY
Nov 8 00:28:08.523057 sshd[6354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:28:08.531342 systemd-logind[1708]: New session 21 of user core.
Nov 8 00:28:08.535903 systemd[1]: Started session-21.scope - Session 21 of User core.
Nov 8 00:28:08.719799 kubelet[3265]: E1108 00:28:08.719741 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7b877f7c4d-czsrv" podUID="eb4448af-cf19-4ce5-bbad-22d98ef7ab44"
Nov 8 00:28:09.238897 sshd[6354]: pam_unix(sshd:session): session closed for user core
Nov 8 00:28:09.243709 systemd-logind[1708]: Session 21 logged out. Waiting for processes to exit.
Nov 8 00:28:09.244774 systemd[1]: sshd@18-10.200.8.41:22-10.200.16.10:32884.service: Deactivated successfully.
Nov 8 00:28:09.248553 systemd[1]: session-21.scope: Deactivated successfully.
Nov 8 00:28:09.250408 systemd-logind[1708]: Removed session 21.
Nov 8 00:28:09.356010 systemd[1]: Started sshd@19-10.200.8.41:22-10.200.16.10:32888.service - OpenSSH per-connection server daemon (10.200.16.10:32888).
Nov 8 00:28:09.990758 sshd[6365]: Accepted publickey for core from 10.200.16.10 port 32888 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY
Nov 8 00:28:09.992810 sshd[6365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:28:09.998990 systemd-logind[1708]: New session 22 of user core.
Nov 8 00:28:10.005898 systemd[1]: Started session-22.scope - Session 22 of User core.
Nov 8 00:28:10.525823 sshd[6365]: pam_unix(sshd:session): session closed for user core
Nov 8 00:28:10.533092 systemd[1]: sshd@19-10.200.8.41:22-10.200.16.10:32888.service: Deactivated successfully.
Nov 8 00:28:10.536421 systemd[1]: session-22.scope: Deactivated successfully.
Nov 8 00:28:10.537940 systemd-logind[1708]: Session 22 logged out. Waiting for processes to exit.
Nov 8 00:28:10.539572 systemd-logind[1708]: Removed session 22.
Nov 8 00:28:10.720291 kubelet[3265]: E1108 00:28:10.720242 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5ccd97dd97-f9lsk" podUID="415c1089-29e6-4262-b21f-188443e0b159"
Nov 8 00:28:13.722084 kubelet[3265]: E1108 00:28:13.722028 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9cc5c" podUID="5435b4b0-7a30-4d83-a845-ff6ed8ff1797"
Nov 8 00:28:13.724063 kubelet[3265]: E1108 00:28:13.723478 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wpv6d" podUID="88161698-8450-46cd-aabf-3650fadd565e"
Nov 8 00:28:15.652826 systemd[1]: Started sshd@20-10.200.8.41:22-10.200.16.10:39408.service - OpenSSH per-connection server daemon (10.200.16.10:39408).
Nov 8 00:28:16.283666 sshd[6380]: Accepted publickey for core from 10.200.16.10 port 39408 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY
Nov 8 00:28:16.288142 sshd[6380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:28:16.295649 systemd-logind[1708]: New session 23 of user core.
Nov 8 00:28:16.302287 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 8 00:28:16.814998 sshd[6380]: pam_unix(sshd:session): session closed for user core
Nov 8 00:28:16.819922 systemd[1]: sshd@20-10.200.8.41:22-10.200.16.10:39408.service: Deactivated successfully.
Nov 8 00:28:16.824218 systemd[1]: session-23.scope: Deactivated successfully.
Nov 8 00:28:16.826860 systemd-logind[1708]: Session 23 logged out. Waiting for processes to exit.
Nov 8 00:28:16.828064 systemd-logind[1708]: Removed session 23.
Nov 8 00:28:17.725516 kubelet[3265]: E1108 00:28:17.723951 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-579774f8c5-5sn5r" podUID="4bf1c931-a10c-42c4-bece-79d61b489c62"
Nov 8 00:28:19.721157 kubelet[3265]: E1108 00:28:19.720670 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-847dc76596-gs868" podUID="f9dd6f03-0839-47a7-a4a1-75b8d5be8ef2"
Nov 8 00:28:19.723519 kubelet[3265]: E1108 00:28:19.722398 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-847dc76596-p994f" podUID="c8f9b682-1403-4984-8b7e-efa798fabe9d"
Nov 8 00:28:21.723024 kubelet[3265]: E1108 00:28:21.722946 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7b877f7c4d-czsrv" podUID="eb4448af-cf19-4ce5-bbad-22d98ef7ab44"
Nov 8 00:28:22.054040 systemd[1]: Started sshd@21-10.200.8.41:22-10.200.16.10:56148.service - OpenSSH per-connection server daemon (10.200.16.10:56148).
Nov 8 00:28:22.688966 sshd[6395]: Accepted publickey for core from 10.200.16.10 port 56148 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY
Nov 8 00:28:22.692444 sshd[6395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:28:22.702185 systemd-logind[1708]: New session 24 of user core.
Nov 8 00:28:22.707917 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 8 00:28:23.253167 sshd[6395]: pam_unix(sshd:session): session closed for user core
Nov 8 00:28:23.258868 systemd-logind[1708]: Session 24 logged out. Waiting for processes to exit.
Nov 8 00:28:23.260710 systemd[1]: sshd@21-10.200.8.41:22-10.200.16.10:56148.service: Deactivated successfully.
Nov 8 00:28:23.264342 systemd[1]: session-24.scope: Deactivated successfully.
Nov 8 00:28:23.266167 systemd-logind[1708]: Removed session 24.
Nov 8 00:28:24.720301 kubelet[3265]: E1108 00:28:24.720148 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5ccd97dd97-f9lsk" podUID="415c1089-29e6-4262-b21f-188443e0b159"
Nov 8 00:28:26.718240 kubelet[3265]: E1108 00:28:26.718165 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9cc5c" podUID="5435b4b0-7a30-4d83-a845-ff6ed8ff1797"
Nov 8 00:28:26.719200 kubelet[3265]: E1108 00:28:26.719101 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wpv6d" podUID="88161698-8450-46cd-aabf-3650fadd565e"
Nov 8 00:28:28.368009 systemd[1]: Started sshd@22-10.200.8.41:22-10.200.16.10:56158.service - OpenSSH per-connection server daemon (10.200.16.10:56158).
Nov 8 00:28:28.721038 kubelet[3265]: E1108 00:28:28.720850 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-579774f8c5-5sn5r" podUID="4bf1c931-a10c-42c4-bece-79d61b489c62"
Nov 8 00:28:28.999272 sshd[6409]: Accepted publickey for core from 10.200.16.10 port 56158 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY
Nov 8 00:28:29.001199 sshd[6409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:28:29.006261 systemd-logind[1708]: New session 25 of user core.
Nov 8 00:28:29.010282 systemd[1]: Started session-25.scope - Session 25 of User core.
Nov 8 00:28:29.591998 sshd[6409]: pam_unix(sshd:session): session closed for user core
Nov 8 00:28:29.595556 systemd-logind[1708]: Session 25 logged out. Waiting for processes to exit.
Nov 8 00:28:29.598519 systemd[1]: sshd@22-10.200.8.41:22-10.200.16.10:56158.service: Deactivated successfully.
Nov 8 00:28:29.601663 systemd[1]: session-25.scope: Deactivated successfully.
Nov 8 00:28:29.603838 systemd-logind[1708]: Removed session 25.
Nov 8 00:28:31.721073 kubelet[3265]: E1108 00:28:31.721020 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-847dc76596-p994f" podUID="c8f9b682-1403-4984-8b7e-efa798fabe9d"
Nov 8 00:28:33.721114 kubelet[3265]: E1108 00:28:33.721063 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-847dc76596-gs868" podUID="f9dd6f03-0839-47a7-a4a1-75b8d5be8ef2"
Nov 8 00:28:34.712802 systemd[1]: Started sshd@23-10.200.8.41:22-10.200.16.10:46338.service - OpenSSH per-connection server daemon (10.200.16.10:46338).
Nov 8 00:28:34.724632 kubelet[3265]: E1108 00:28:34.724416 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7b877f7c4d-czsrv" podUID="eb4448af-cf19-4ce5-bbad-22d98ef7ab44"
Nov 8 00:28:35.354801 sshd[6443]: Accepted publickey for core from 10.200.16.10 port 46338 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY
Nov 8 00:28:35.357276 sshd[6443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:28:35.372934 systemd-logind[1708]: New session 26 of user core.
Nov 8 00:28:35.381068 systemd[1]: Started session-26.scope - Session 26 of User core.
Nov 8 00:28:35.905769 sshd[6443]: pam_unix(sshd:session): session closed for user core
Nov 8 00:28:35.911318 systemd[1]: sshd@23-10.200.8.41:22-10.200.16.10:46338.service: Deactivated successfully.
Nov 8 00:28:35.915660 systemd[1]: session-26.scope: Deactivated successfully.
Nov 8 00:28:35.917035 systemd-logind[1708]: Session 26 logged out. Waiting for processes to exit.
Nov 8 00:28:35.918404 systemd-logind[1708]: Removed session 26.
Nov 8 00:28:37.722840 kubelet[3265]: E1108 00:28:37.722152 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5ccd97dd97-f9lsk" podUID="415c1089-29e6-4262-b21f-188443e0b159"
Nov 8 00:28:38.721577 kubelet[3265]: E1108 00:28:38.721378 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wpv6d" podUID="88161698-8450-46cd-aabf-3650fadd565e"
Nov 8 00:28:41.031085 systemd[1]: Started sshd@24-10.200.8.41:22-10.200.16.10:41334.service - OpenSSH per-connection server daemon (10.200.16.10:41334).
Nov 8 00:28:41.682330 sshd[6459]: Accepted publickey for core from 10.200.16.10 port 41334 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY
Nov 8 00:28:41.685442 sshd[6459]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:28:41.694340 systemd-logind[1708]: New session 27 of user core.
Nov 8 00:28:41.697923 systemd[1]: Started session-27.scope - Session 27 of User core.
Nov 8 00:28:41.721548 kubelet[3265]: E1108 00:28:41.721368 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9cc5c" podUID="5435b4b0-7a30-4d83-a845-ff6ed8ff1797"
Nov 8 00:28:42.224970 sshd[6459]: pam_unix(sshd:session): session closed for user core
Nov 8 00:28:42.229269 systemd[1]: sshd@24-10.200.8.41:22-10.200.16.10:41334.service: Deactivated successfully.
Nov 8 00:28:42.231120 systemd-logind[1708]: Session 27 logged out. Waiting for processes to exit.
Nov 8 00:28:42.233919 systemd[1]: session-27.scope: Deactivated successfully.
Nov 8 00:28:42.237833 systemd-logind[1708]: Removed session 27.
Nov 8 00:28:43.720439 kubelet[3265]: E1108 00:28:43.720141 3265 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-579774f8c5-5sn5r" podUID="4bf1c931-a10c-42c4-bece-79d61b489c62"