Nov 8 00:23:38.135695 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Nov 7 22:45:04 -00 2025
Nov 8 00:23:38.135722 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:23:38.135734 kernel: BIOS-provided physical RAM map:
Nov 8 00:23:38.135741 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Nov 8 00:23:38.135746 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Nov 8 00:23:38.135753 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Nov 8 00:23:38.135760 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20
Nov 8 00:23:38.135771 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc4fff] reserved
Nov 8 00:23:38.135778 kernel: BIOS-e820: [mem 0x000000003ffc5000-0x000000003ffd2fff] usable
Nov 8 00:23:38.135785 kernel: BIOS-e820: [mem 0x000000003ffd3000-0x000000003fffafff] ACPI data
Nov 8 00:23:38.135795 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Nov 8 00:23:38.135801 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Nov 8 00:23:38.135808 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Nov 8 00:23:38.135818 kernel: printk: bootconsole [earlyser0] enabled
Nov 8 00:23:38.135833 kernel: NX (Execute Disable) protection: active
Nov 8 00:23:38.135844 kernel: APIC: Static calls initialized
Nov 8 00:23:38.135853 kernel: efi: EFI v2.7 by Microsoft
Nov 8 00:23:38.135860 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f339a98
Nov 8 00:23:38.135871 kernel: SMBIOS 3.1.0 present.
Nov 8 00:23:38.135882 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Nov 8 00:23:38.135892 kernel: Hypervisor detected: Microsoft Hyper-V
Nov 8 00:23:38.135903 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Nov 8 00:23:38.135914 kernel: Hyper-V: Host Build 10.0.26100.1414-1-0
Nov 8 00:23:38.135924 kernel: Hyper-V: Nested features: 0x1e0101
Nov 8 00:23:38.135938 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Nov 8 00:23:38.135950 kernel: Hyper-V: Using hypercall for remote TLB flush
Nov 8 00:23:38.135962 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Nov 8 00:23:38.135976 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Nov 8 00:23:38.135988 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Nov 8 00:23:38.136002 kernel: tsc: Detected 2593.907 MHz processor
Nov 8 00:23:38.136015 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 8 00:23:38.136027 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 8 00:23:38.136041 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Nov 8 00:23:38.136057 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Nov 8 00:23:38.136071 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 8 00:23:38.136085 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Nov 8 00:23:38.136098 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Nov 8 00:23:38.136111 kernel: Using GB pages for direct mapping
Nov 8 00:23:38.136125 kernel: Secure boot disabled
Nov 8 00:23:38.136139 kernel: ACPI: Early table checksum verification disabled
Nov 8 00:23:38.136159 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Nov 8 00:23:38.136175 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 8 00:23:38.136190 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 8 00:23:38.136206 kernel: ACPI: DSDT 0x000000003FFD6000 01E11C (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Nov 8 00:23:38.136221 kernel: ACPI: FACS 0x000000003FFFE000 000040
Nov 8 00:23:38.136349 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 8 00:23:38.136364 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 8 00:23:38.136382 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 8 00:23:38.136396 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 8 00:23:38.136409 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 8 00:23:38.136423 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 8 00:23:38.136436 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Nov 8 00:23:38.136450 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff411b]
Nov 8 00:23:38.136463 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Nov 8 00:23:38.136476 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Nov 8 00:23:38.136490 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Nov 8 00:23:38.136506 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Nov 8 00:23:38.136520 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Nov 8 00:23:38.136533 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Nov 8 00:23:38.136546 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Nov 8 00:23:38.136558 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Nov 8 00:23:38.136572 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Nov 8 00:23:38.136584 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Nov 8 00:23:38.136597 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Nov 8 00:23:38.136610 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Nov 8 00:23:38.136626 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Nov 8 00:23:38.136639 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Nov 8 00:23:38.136653 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Nov 8 00:23:38.136667 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Nov 8 00:23:38.136681 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Nov 8 00:23:38.136693 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Nov 8 00:23:38.136706 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Nov 8 00:23:38.136720 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Nov 8 00:23:38.136738 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Nov 8 00:23:38.136752 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Nov 8 00:23:38.136766 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Nov 8 00:23:38.136778 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Nov 8 00:23:38.136791 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Nov 8 00:23:38.136803 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Nov 8 00:23:38.136815 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Nov 8 00:23:38.136827 kernel: Zone ranges:
Nov 8 00:23:38.136839 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 8 00:23:38.136857 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Nov 8 00:23:38.136869 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Nov 8 00:23:38.136881 kernel: Movable zone start for each node
Nov 8 00:23:38.138263 kernel: Early memory node ranges
Nov 8 00:23:38.138290 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Nov 8 00:23:38.138305 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Nov 8 00:23:38.138320 kernel: node 0: [mem 0x000000003ffc5000-0x000000003ffd2fff]
Nov 8 00:23:38.138334 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Nov 8 00:23:38.138349 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Nov 8 00:23:38.138369 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Nov 8 00:23:38.138384 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 8 00:23:38.138399 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Nov 8 00:23:38.138413 kernel: On node 0, zone DMA32: 132 pages in unavailable ranges
Nov 8 00:23:38.138427 kernel: On node 0, zone DMA32: 44 pages in unavailable ranges
Nov 8 00:23:38.138441 kernel: ACPI: PM-Timer IO Port: 0x408
Nov 8 00:23:38.138456 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Nov 8 00:23:38.138470 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Nov 8 00:23:38.138485 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 8 00:23:38.138503 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 8 00:23:38.138518 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Nov 8 00:23:38.138532 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Nov 8 00:23:38.138547 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Nov 8 00:23:38.138561 kernel: Booting paravirtualized kernel on Hyper-V
Nov 8 00:23:38.138576 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 8 00:23:38.138590 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 8 00:23:38.138605 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u1048576
Nov 8 00:23:38.138619 kernel: pcpu-alloc: s196712 r8192 d32664 u1048576 alloc=1*2097152
Nov 8 00:23:38.138637 kernel: pcpu-alloc: [0] 0 1
Nov 8 00:23:38.138651 kernel: Hyper-V: PV spinlocks enabled
Nov 8 00:23:38.138665 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 8 00:23:38.138681 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:23:38.138696 kernel: random: crng init done
Nov 8 00:23:38.138710 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 8 00:23:38.138725 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 8 00:23:38.138739 kernel: Fallback order for Node 0: 0
Nov 8 00:23:38.138757 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062376
Nov 8 00:23:38.138782 kernel: Policy zone: Normal
Nov 8 00:23:38.138797 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 8 00:23:38.138815 kernel: software IO TLB: area num 2.
Nov 8 00:23:38.138830 kernel: Memory: 8074604K/8387516K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42880K init, 2320K bss, 312652K reserved, 0K cma-reserved)
Nov 8 00:23:38.138846 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 8 00:23:38.138861 kernel: ftrace: allocating 37980 entries in 149 pages
Nov 8 00:23:38.138876 kernel: ftrace: allocated 149 pages with 4 groups
Nov 8 00:23:38.138892 kernel: Dynamic Preempt: voluntary
Nov 8 00:23:38.138907 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 8 00:23:38.138931 kernel: rcu: RCU event tracing is enabled.
Nov 8 00:23:38.138947 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 8 00:23:38.138963 kernel: Trampoline variant of Tasks RCU enabled.
Nov 8 00:23:38.138978 kernel: Rude variant of Tasks RCU enabled.
Nov 8 00:23:38.138993 kernel: Tracing variant of Tasks RCU enabled.
Nov 8 00:23:38.139008 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 8 00:23:38.139024 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 8 00:23:38.139043 kernel: Using NULL legacy PIC
Nov 8 00:23:38.139059 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Nov 8 00:23:38.139073 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 8 00:23:38.139089 kernel: Console: colour dummy device 80x25
Nov 8 00:23:38.139104 kernel: printk: console [tty1] enabled
Nov 8 00:23:38.139119 kernel: printk: console [ttyS0] enabled
Nov 8 00:23:38.139133 kernel: printk: bootconsole [earlyser0] disabled
Nov 8 00:23:38.139148 kernel: ACPI: Core revision 20230628
Nov 8 00:23:38.139163 kernel: Failed to register legacy timer interrupt
Nov 8 00:23:38.139181 kernel: APIC: Switch to symmetric I/O mode setup
Nov 8 00:23:38.139197 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Nov 8 00:23:38.139211 kernel: Hyper-V: Using IPI hypercalls
Nov 8 00:23:38.139225 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Nov 8 00:23:38.139254 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Nov 8 00:23:38.139267 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Nov 8 00:23:38.139279 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Nov 8 00:23:38.139294 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Nov 8 00:23:38.139309 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Nov 8 00:23:38.139327 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593907)
Nov 8 00:23:38.139339 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Nov 8 00:23:38.139352 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Nov 8 00:23:38.139361 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 8 00:23:38.139370 kernel: Spectre V2 : Mitigation: Retpolines
Nov 8 00:23:38.139382 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 8 00:23:38.139390 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Nov 8 00:23:38.139402 kernel: RETBleed: Vulnerable
Nov 8 00:23:38.139410 kernel: Speculative Store Bypass: Vulnerable
Nov 8 00:23:38.139418 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 8 00:23:38.139432 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 8 00:23:38.139441 kernel: active return thunk: its_return_thunk
Nov 8 00:23:38.139449 kernel: ITS: Mitigation: Aligned branch/return thunks
Nov 8 00:23:38.139457 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 8 00:23:38.139468 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 8 00:23:38.139477 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 8 00:23:38.139487 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Nov 8 00:23:38.139496 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Nov 8 00:23:38.139507 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Nov 8 00:23:38.139517 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 8 00:23:38.139526 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Nov 8 00:23:38.139537 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Nov 8 00:23:38.139548 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Nov 8 00:23:38.139556 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Nov 8 00:23:38.139564 kernel: Freeing SMP alternatives memory: 32K
Nov 8 00:23:38.139572 kernel: pid_max: default: 32768 minimum: 301
Nov 8 00:23:38.139583 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 8 00:23:38.139591 kernel: landlock: Up and running.
Nov 8 00:23:38.139599 kernel: SELinux: Initializing.
Nov 8 00:23:38.139611 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 8 00:23:38.139620 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 8 00:23:38.139628 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Nov 8 00:23:38.139641 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 8 00:23:38.139650 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 8 00:23:38.139658 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 8 00:23:38.139668 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Nov 8 00:23:38.139678 kernel: signal: max sigframe size: 3632
Nov 8 00:23:38.139686 kernel: rcu: Hierarchical SRCU implementation.
Nov 8 00:23:38.139696 kernel: rcu: Max phase no-delay instances is 400.
Nov 8 00:23:38.139706 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 8 00:23:38.139714 kernel: smp: Bringing up secondary CPUs ...
Nov 8 00:23:38.139726 kernel: smpboot: x86: Booting SMP configuration:
Nov 8 00:23:38.139735 kernel: .... node #0, CPUs: #1
Nov 8 00:23:38.139744 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Nov 8 00:23:38.139756 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Nov 8 00:23:38.139765 kernel: smp: Brought up 1 node, 2 CPUs
Nov 8 00:23:38.139773 kernel: smpboot: Max logical packages: 1
Nov 8 00:23:38.139784 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Nov 8 00:23:38.139793 kernel: devtmpfs: initialized
Nov 8 00:23:38.139802 kernel: x86/mm: Memory block size: 128MB
Nov 8 00:23:38.139812 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Nov 8 00:23:38.139820 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 8 00:23:38.139832 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 8 00:23:38.139840 kernel: pinctrl core: initialized pinctrl subsystem
Nov 8 00:23:38.139848 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 8 00:23:38.139856 kernel: audit: initializing netlink subsys (disabled)
Nov 8 00:23:38.139865 kernel: audit: type=2000 audit(1762561416.029:1): state=initialized audit_enabled=0 res=1
Nov 8 00:23:38.139873 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 8 00:23:38.139883 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 8 00:23:38.139891 kernel: cpuidle: using governor menu
Nov 8 00:23:38.139902 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 8 00:23:38.139911 kernel: dca service started, version 1.12.1
Nov 8 00:23:38.139919 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Nov 8 00:23:38.139927 kernel: e820: reserve RAM buffer [mem 0x3ffd3000-0x3fffffff]
Nov 8 00:23:38.139935 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 8 00:23:38.139947 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 8 00:23:38.139955 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 8 00:23:38.139968 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 8 00:23:38.139978 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 8 00:23:38.139986 kernel: ACPI: Added _OSI(Module Device)
Nov 8 00:23:38.139994 kernel: ACPI: Added _OSI(Processor Device)
Nov 8 00:23:38.140002 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 8 00:23:38.140014 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 8 00:23:38.140022 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 8 00:23:38.140030 kernel: ACPI: Interpreter enabled
Nov 8 00:23:38.140038 kernel: ACPI: PM: (supports S0 S5)
Nov 8 00:23:38.140051 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 8 00:23:38.140060 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 8 00:23:38.140068 kernel: PCI: Ignoring E820 reservations for host bridge windows
Nov 8 00:23:38.140079 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Nov 8 00:23:38.140088 kernel: iommu: Default domain type: Translated
Nov 8 00:23:38.140096 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 8 00:23:38.140104 kernel: efivars: Registered efivars operations
Nov 8 00:23:38.140112 kernel: PCI: Using ACPI for IRQ routing
Nov 8 00:23:38.140123 kernel: PCI: System does not support PCI
Nov 8 00:23:38.140131 kernel: vgaarb: loaded
Nov 8 00:23:38.140142 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Nov 8 00:23:38.140152 kernel: VFS: Disk quotas dquot_6.6.0
Nov 8 00:23:38.140161 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 8 00:23:38.140169 kernel: pnp: PnP ACPI init
Nov 8 00:23:38.140177 kernel: pnp: PnP ACPI: found 3 devices
Nov 8 00:23:38.140189 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 8 00:23:38.140197 kernel: NET: Registered PF_INET protocol family
Nov 8 00:23:38.140205 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 8 00:23:38.140214 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 8 00:23:38.140224 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 8 00:23:38.140245 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 8 00:23:38.140253 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Nov 8 00:23:38.140261 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 8 00:23:38.140273 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 8 00:23:38.140281 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 8 00:23:38.140290 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 8 00:23:38.140298 kernel: NET: Registered PF_XDP protocol family
Nov 8 00:23:38.140309 kernel: PCI: CLS 0 bytes, default 64
Nov 8 00:23:38.140317 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 8 00:23:38.140325 kernel: software IO TLB: mapped [mem 0x000000003b339000-0x000000003f339000] (64MB)
Nov 8 00:23:38.140337 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Nov 8 00:23:38.140345 kernel: Initialise system trusted keyrings
Nov 8 00:23:38.140353 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Nov 8 00:23:38.140361 kernel: Key type asymmetric registered
Nov 8 00:23:38.140369 kernel: Asymmetric key parser 'x509' registered
Nov 8 00:23:38.140377 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 8 00:23:38.140388 kernel: io scheduler mq-deadline registered
Nov 8 00:23:38.140399 kernel: io scheduler kyber registered
Nov 8 00:23:38.140407 kernel: io scheduler bfq registered
Nov 8 00:23:38.140415 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 8 00:23:38.140423 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 8 00:23:38.140431 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 8 00:23:38.140440 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Nov 8 00:23:38.140451 kernel: i8042: PNP: No PS/2 controller found.
Nov 8 00:23:38.140589 kernel: rtc_cmos 00:02: registered as rtc0
Nov 8 00:23:38.140687 kernel: rtc_cmos 00:02: setting system clock to 2025-11-08T00:23:37 UTC (1762561417)
Nov 8 00:23:38.140774 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Nov 8 00:23:38.140785 kernel: intel_pstate: CPU model not supported
Nov 8 00:23:38.140798 kernel: efifb: probing for efifb
Nov 8 00:23:38.140806 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Nov 8 00:23:38.140815 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Nov 8 00:23:38.140823 kernel: efifb: scrolling: redraw
Nov 8 00:23:38.140833 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Nov 8 00:23:38.140846 kernel: Console: switching to colour frame buffer device 128x48
Nov 8 00:23:38.140854 kernel: fb0: EFI VGA frame buffer device
Nov 8 00:23:38.140862 kernel: pstore: Using crash dump compression: deflate
Nov 8 00:23:38.140870 kernel: pstore: Registered efi_pstore as persistent store backend
Nov 8 00:23:38.140878 kernel: NET: Registered PF_INET6 protocol family
Nov 8 00:23:38.140886 kernel: Segment Routing with IPv6
Nov 8 00:23:38.140898 kernel: In-situ OAM (IOAM) with IPv6
Nov 8 00:23:38.140906 kernel: NET: Registered PF_PACKET protocol family
Nov 8 00:23:38.140914 kernel: Key type dns_resolver registered
Nov 8 00:23:38.140928 kernel: IPI shorthand broadcast: enabled
Nov 8 00:23:38.140937 kernel: sched_clock: Marking stable (1001003200, 55339500)->(1313575200, -257232500)
Nov 8 00:23:38.140945 kernel: registered taskstats version 1
Nov 8 00:23:38.140956 kernel: Loading compiled-in X.509 certificates
Nov 8 00:23:38.140965 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cf7a35a152685ec84a621291e4ce58c959319dfd'
Nov 8 00:23:38.140973 kernel: Key type .fscrypt registered
Nov 8 00:23:38.140981 kernel: Key type fscrypt-provisioning registered
Nov 8 00:23:38.140989 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 8 00:23:38.141000 kernel: ima: Allocated hash algorithm: sha1
Nov 8 00:23:38.141011 kernel: ima: No architecture policies found
Nov 8 00:23:38.141019 kernel: clk: Disabling unused clocks
Nov 8 00:23:38.141031 kernel: Freeing unused kernel image (initmem) memory: 42880K
Nov 8 00:23:38.141040 kernel: Write protecting the kernel read-only data: 36864k
Nov 8 00:23:38.141048 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Nov 8 00:23:38.141060 kernel: Run /init as init process
Nov 8 00:23:38.141068 kernel: with arguments:
Nov 8 00:23:38.141076 kernel: /init
Nov 8 00:23:38.141088 kernel: with environment:
Nov 8 00:23:38.141098 kernel: HOME=/
Nov 8 00:23:38.141106 kernel: TERM=linux
Nov 8 00:23:38.141118 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 8 00:23:38.141129 systemd[1]: Detected virtualization microsoft.
Nov 8 00:23:38.141138 systemd[1]: Detected architecture x86-64.
Nov 8 00:23:38.141150 systemd[1]: Running in initrd.
Nov 8 00:23:38.141158 systemd[1]: No hostname configured, using default hostname.
Nov 8 00:23:38.141167 systemd[1]: Hostname set to .
Nov 8 00:23:38.141178 systemd[1]: Initializing machine ID from random generator.
Nov 8 00:23:38.141186 systemd[1]: Queued start job for default target initrd.target.
Nov 8 00:23:38.141195 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 00:23:38.141203 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 00:23:38.141212 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 8 00:23:38.141225 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 8 00:23:38.141244 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 8 00:23:38.141256 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 8 00:23:38.141268 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 8 00:23:38.141277 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 8 00:23:38.141286 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 00:23:38.141298 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 8 00:23:38.141307 systemd[1]: Reached target paths.target - Path Units.
Nov 8 00:23:38.141316 systemd[1]: Reached target slices.target - Slice Units.
Nov 8 00:23:38.141324 systemd[1]: Reached target swap.target - Swaps.
Nov 8 00:23:38.141339 systemd[1]: Reached target timers.target - Timer Units.
Nov 8 00:23:38.141347 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 8 00:23:38.141359 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 8 00:23:38.141369 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 8 00:23:38.141377 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 8 00:23:38.141390 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 8 00:23:38.141398 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 8 00:23:38.141407 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 8 00:23:38.141422 systemd[1]: Reached target sockets.target - Socket Units.
Nov 8 00:23:38.141430 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 8 00:23:38.141439 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 8 00:23:38.141451 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 8 00:23:38.141459 systemd[1]: Starting systemd-fsck-usr.service...
Nov 8 00:23:38.141468 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 8 00:23:38.141476 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 8 00:23:38.141505 systemd-journald[176]: Collecting audit messages is disabled.
Nov 8 00:23:38.141532 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:23:38.141542 systemd-journald[176]: Journal started
Nov 8 00:23:38.141564 systemd-journald[176]: Runtime Journal (/run/log/journal/9ccc59a53ecf4b4eb972e2efd3e551a1) is 8.0M, max 158.8M, 150.8M free.
Nov 8 00:23:38.161796 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 8 00:23:38.164217 systemd-modules-load[177]: Inserted module 'overlay'
Nov 8 00:23:38.171348 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 8 00:23:38.183514 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 8 00:23:38.194034 systemd[1]: Finished systemd-fsck-usr.service.
Nov 8 00:23:38.209626 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 8 00:23:38.199981 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:23:38.217255 kernel: Bridge firewalling registered
Nov 8 00:23:38.217218 systemd-modules-load[177]: Inserted module 'br_netfilter'
Nov 8 00:23:38.223444 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:23:38.230369 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 8 00:23:38.239392 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 8 00:23:38.244543 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 8 00:23:38.257899 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 8 00:23:38.271552 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 8 00:23:38.278370 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 8 00:23:38.290525 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:23:38.294897 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 8 00:23:38.304535 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 8 00:23:38.310323 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 8 00:23:38.324451 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 8 00:23:38.332432 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 8 00:23:38.345988 dracut-cmdline[213]: dracut-dracut-053
Nov 8 00:23:38.353256 dracut-cmdline[213]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472
Nov 8 00:23:38.405542 systemd-resolved[216]: Positive Trust Anchors:
Nov 8 00:23:38.405556 systemd-resolved[216]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 8 00:23:38.405612 systemd-resolved[216]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 8 00:23:38.437047 systemd-resolved[216]: Defaulting to hostname 'linux'.
Nov 8 00:23:38.449702 kernel: SCSI subsystem initialized
Nov 8 00:23:38.441629 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 8 00:23:38.448749 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 8 00:23:38.462249 kernel: Loading iSCSI transport class v2.0-870.
Nov 8 00:23:38.474294 kernel: iscsi: registered transport (tcp)
Nov 8 00:23:38.497428 kernel: iscsi: registered transport (qla4xxx)
Nov 8 00:23:38.497517 kernel: QLogic iSCSI HBA Driver
Nov 8 00:23:38.534355 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 8 00:23:38.544475 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 8 00:23:38.579466 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 8 00:23:38.579561 kernel: device-mapper: uevent: version 1.0.3
Nov 8 00:23:38.583496 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 8 00:23:38.625261 kernel: raid6: avx512x4 gen() 18055 MB/s
Nov 8 00:23:38.644252 kernel: raid6: avx512x2 gen() 18117 MB/s
Nov 8 00:23:38.663239 kernel: raid6: avx512x1 gen() 18175 MB/s
Nov 8 00:23:38.683246 kernel: raid6: avx2x4 gen() 17871 MB/s
Nov 8 00:23:38.702247 kernel: raid6: avx2x2 gen() 18033 MB/s
Nov 8 00:23:38.722951 kernel: raid6: avx2x1 gen() 13454 MB/s
Nov 8 00:23:38.722996 kernel: raid6: using algorithm avx512x1 gen() 18175 MB/s
Nov 8 00:23:38.746379 kernel: raid6: .... xor() 26587 MB/s, rmw enabled
Nov 8 00:23:38.746415 kernel: raid6: using avx512x2 recovery algorithm
Nov 8 00:23:38.769254 kernel: xor: automatically using best checksumming function avx
Nov 8 00:23:38.923260 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 8 00:23:38.933257 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 8 00:23:38.945522 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 8 00:23:38.958829 systemd-udevd[399]: Using default interface naming scheme 'v255'.
Nov 8 00:23:38.963309 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 8 00:23:38.977405 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 8 00:23:38.990595 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation
Nov 8 00:23:39.020172 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 8 00:23:39.033450 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 8 00:23:39.077432 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 00:23:39.095480 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 8 00:23:39.136765 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 8 00:23:39.144408 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 8 00:23:39.152178 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 00:23:39.156714 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 8 00:23:39.172536 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 8 00:23:39.188052 kernel: cryptd: max_cpu_qlen set to 1000
Nov 8 00:23:39.188102 kernel: hv_vmbus: Vmbus version:5.2
Nov 8 00:23:39.190637 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 8 00:23:39.191929 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:23:39.201677 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:23:39.209296 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 8 00:23:39.209498 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:23:39.215362 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:23:39.238944 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:23:39.259149 kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 8 00:23:39.259175 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Nov 8 00:23:39.262279 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 8 00:23:39.285687 kernel: hv_vmbus: registering driver hyperv_keyboard
Nov 8 00:23:39.285735 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 8 00:23:39.288245 kernel: AES CTR mode by8 optimization enabled
Nov 8 00:23:39.297446 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 8 00:23:39.313514 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Nov 8 00:23:39.297591 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:23:39.323260 kernel: hv_vmbus: registering driver hv_netvsc
Nov 8 00:23:39.331699 kernel: PTP clock support registered
Nov 8 00:23:39.327657 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:23:39.348257 kernel: hv_vmbus: registering driver hv_storvsc
Nov 8 00:23:39.353959 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 8 00:23:39.353999 kernel: hv_utils: Registering HyperV Utility Driver
Nov 8 00:23:39.357699 kernel: hv_vmbus: registering driver hv_utils
Nov 8 00:23:39.359256 kernel: hv_utils: Heartbeat IC version 3.0
Nov 8 00:23:39.359294 kernel: hv_utils: Shutdown IC version 3.2
Nov 8 00:23:39.359314 kernel: hv_utils: TimeSync IC version 4.0
Nov 8 00:23:39.587523 systemd-resolved[216]: Clock change detected. Flushing caches.
Nov 8 00:23:39.618110 kernel: scsi host1: storvsc_host_t
Nov 8 00:23:39.618361 kernel: scsi host0: storvsc_host_t
Nov 8 00:23:39.618521 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Nov 8 00:23:39.618692 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Nov 8 00:23:39.625746 kernel: hv_vmbus: registering driver hid_hyperv
Nov 8 00:23:39.637741 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Nov 8 00:23:39.637784 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Nov 8 00:23:39.645095 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:23:39.660827 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Nov 8 00:23:39.661108 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 8 00:23:39.662739 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Nov 8 00:23:39.663160 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 8 00:23:39.683754 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Nov 8 00:23:39.684081 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Nov 8 00:23:39.688502 kernel: sd 0:0:0:0: [sda] Write Protect is off
Nov 8 00:23:39.688751 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Nov 8 00:23:39.696701 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Nov 8 00:23:39.695542 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 8 00:23:39.714271 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:23:39.714317 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Nov 8 00:23:39.714529 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#94 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Nov 8 00:23:39.742780 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#295 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Nov 8 00:23:39.758298 kernel: hv_netvsc 000d3ab4-9ac3-000d-3ab4-9ac3000d3ab4 eth0: VF slot 1 added
Nov 8 00:23:39.769239 kernel: hv_vmbus: registering driver hv_pci
Nov 8 00:23:39.776787 kernel: hv_pci 2e871faa-0add-4e8d-8cd0-1fdecd8375ce: PCI VMBus probing: Using version 0x10004
Nov 8 00:23:39.777008 kernel: hv_pci 2e871faa-0add-4e8d-8cd0-1fdecd8375ce: PCI host bridge to bus 0add:00
Nov 8 00:23:39.783575 kernel: pci_bus 0add:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Nov 8 00:23:39.787215 kernel: pci_bus 0add:00: No busn resource found for root bus, will use [bus 00-ff]
Nov 8 00:23:39.793962 kernel: pci 0add:00:02.0: [15b3:1016] type 00 class 0x020000
Nov 8 00:23:39.799796 kernel: pci 0add:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Nov 8 00:23:39.803776 kernel: pci 0add:00:02.0: enabling Extended Tags
Nov 8 00:23:39.816798 kernel: pci 0add:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 0add:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Nov 8 00:23:39.824440 kernel: pci_bus 0add:00: busn_res: [bus 00-ff] end is updated to 00
Nov 8 00:23:39.824790 kernel: pci 0add:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Nov 8 00:23:39.999767 kernel: mlx5_core 0add:00:02.0: enabling device (0000 -> 0002)
Nov 8 00:23:40.004748 kernel: mlx5_core 0add:00:02.0: firmware version: 14.30.5006
Nov 8 00:23:40.198587 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Nov 8 00:23:40.223752 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (451)
Nov 8 00:23:40.230871 kernel: hv_netvsc 000d3ab4-9ac3-000d-3ab4-9ac3000d3ab4 eth0: VF registering: eth1
Nov 8 00:23:40.231090 kernel: BTRFS: device fsid a2737782-a37e-42f9-8b56-489a87f47acc devid 1 transid 35 /dev/sda3 scanned by (udev-worker) (460)
Nov 8 00:23:40.240310 kernel: mlx5_core 0add:00:02.0 eth1: joined to eth0
Nov 8 00:23:40.249172 kernel: mlx5_core 0add:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Nov 8 00:23:40.263755 kernel: mlx5_core 0add:00:02.0 enP2781s1: renamed from eth1
Nov 8 00:23:40.275110 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Nov 8 00:23:40.290440 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Nov 8 00:23:40.294814 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Nov 8 00:23:40.308281 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Nov 8 00:23:40.323888 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 8 00:23:40.341778 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:23:40.348744 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:23:41.364289 disk-uuid[603]: The operation has completed successfully.
Nov 8 00:23:41.367635 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 8 00:23:41.445977 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 8 00:23:41.446101 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 8 00:23:41.477914 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 8 00:23:41.486128 sh[716]: Success
Nov 8 00:23:41.519771 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Nov 8 00:23:41.777583 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 8 00:23:41.790863 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 8 00:23:41.797055 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 8 00:23:41.826347 kernel: BTRFS info (device dm-0): first mount of filesystem a2737782-a37e-42f9-8b56-489a87f47acc
Nov 8 00:23:41.826434 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:23:41.830613 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 8 00:23:41.837456 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 8 00:23:41.840353 kernel: BTRFS info (device dm-0): using free space tree
Nov 8 00:23:42.211799 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 8 00:23:42.213750 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 8 00:23:42.224132 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 8 00:23:42.230128 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 8 00:23:42.263353 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:23:42.263432 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:23:42.266641 kernel: BTRFS info (device sda6): using free space tree
Nov 8 00:23:42.304161 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 8 00:23:42.319918 kernel: BTRFS info (device sda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:23:42.319408 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 8 00:23:42.327416 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 8 00:23:42.341889 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 8 00:23:42.348641 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 8 00:23:42.359915 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 8 00:23:42.378932 systemd-networkd[898]: lo: Link UP
Nov 8 00:23:42.378939 systemd-networkd[898]: lo: Gained carrier
Nov 8 00:23:42.384879 systemd-networkd[898]: Enumeration completed
Nov 8 00:23:42.386609 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 8 00:23:42.387077 systemd-networkd[898]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:23:42.387081 systemd-networkd[898]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 8 00:23:42.393059 systemd[1]: Reached target network.target - Network.
Nov 8 00:23:42.445752 kernel: mlx5_core 0add:00:02.0 enP2781s1: Link up
Nov 8 00:23:42.478768 kernel: hv_netvsc 000d3ab4-9ac3-000d-3ab4-9ac3000d3ab4 eth0: Data path switched to VF: enP2781s1
Nov 8 00:23:42.479603 systemd-networkd[898]: enP2781s1: Link UP
Nov 8 00:23:42.479750 systemd-networkd[898]: eth0: Link UP
Nov 8 00:23:42.479915 systemd-networkd[898]: eth0: Gained carrier
Nov 8 00:23:42.479928 systemd-networkd[898]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:23:42.493197 systemd-networkd[898]: enP2781s1: Gained carrier
Nov 8 00:23:42.520783 systemd-networkd[898]: eth0: DHCPv4 address 10.200.8.42/24, gateway 10.200.8.1 acquired from 168.63.129.16
Nov 8 00:23:43.351941 ignition[900]: Ignition 2.19.0
Nov 8 00:23:43.351955 ignition[900]: Stage: fetch-offline
Nov 8 00:23:43.352007 ignition[900]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:23:43.352018 ignition[900]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 8 00:23:43.352144 ignition[900]: parsed url from cmdline: ""
Nov 8 00:23:43.352150 ignition[900]: no config URL provided
Nov 8 00:23:43.352157 ignition[900]: reading system config file "/usr/lib/ignition/user.ign"
Nov 8 00:23:43.352169 ignition[900]: no config at "/usr/lib/ignition/user.ign"
Nov 8 00:23:43.352176 ignition[900]: failed to fetch config: resource requires networking
Nov 8 00:23:43.352489 ignition[900]: Ignition finished successfully
Nov 8 00:23:43.371954 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 8 00:23:43.389942 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 8 00:23:43.405831 ignition[909]: Ignition 2.19.0
Nov 8 00:23:43.405844 ignition[909]: Stage: fetch
Nov 8 00:23:43.406077 ignition[909]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:23:43.406091 ignition[909]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 8 00:23:43.406210 ignition[909]: parsed url from cmdline: ""
Nov 8 00:23:43.406214 ignition[909]: no config URL provided
Nov 8 00:23:43.406219 ignition[909]: reading system config file "/usr/lib/ignition/user.ign"
Nov 8 00:23:43.406227 ignition[909]: no config at "/usr/lib/ignition/user.ign"
Nov 8 00:23:43.406248 ignition[909]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Nov 8 00:23:43.530655 ignition[909]: GET result: OK
Nov 8 00:23:43.530770 ignition[909]: config has been read from IMDS userdata
Nov 8 00:23:43.530809 ignition[909]: parsing config with SHA512: 9c91a669a439a898a8193a470c9864f216ca995124385422287c30ad53438902622c89fb770753d8c741886a4e9e49d88d39ab24fef053ea7b4646b287a86c58
Nov 8 00:23:43.536750 unknown[909]: fetched base config from "system"
Nov 8 00:23:43.536772 unknown[909]: fetched base config from "system"
Nov 8 00:23:43.537693 ignition[909]: fetch: fetch complete
Nov 8 00:23:43.536781 unknown[909]: fetched user config from "azure"
Nov 8 00:23:43.537711 ignition[909]: fetch: fetch passed
Nov 8 00:23:43.539313 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 8 00:23:43.537767 ignition[909]: Ignition finished successfully
Nov 8 00:23:43.552285 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 8 00:23:43.568422 ignition[915]: Ignition 2.19.0
Nov 8 00:23:43.568434 ignition[915]: Stage: kargs
Nov 8 00:23:43.571389 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 8 00:23:43.568674 ignition[915]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:23:43.568688 ignition[915]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 8 00:23:43.569506 ignition[915]: kargs: kargs passed
Nov 8 00:23:43.569551 ignition[915]: Ignition finished successfully
Nov 8 00:23:43.590296 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 8 00:23:43.606987 ignition[921]: Ignition 2.19.0
Nov 8 00:23:43.606999 ignition[921]: Stage: disks
Nov 8 00:23:43.607231 ignition[921]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:23:43.609655 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 8 00:23:43.607245 ignition[921]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 8 00:23:43.616342 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 8 00:23:43.608123 ignition[921]: disks: disks passed
Nov 8 00:23:43.608170 ignition[921]: Ignition finished successfully
Nov 8 00:23:43.634045 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 8 00:23:43.638068 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 8 00:23:43.641256 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 8 00:23:43.644531 systemd[1]: Reached target basic.target - Basic System.
Nov 8 00:23:43.664878 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 8 00:23:43.729742 systemd-fsck[929]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Nov 8 00:23:43.735634 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 8 00:23:43.749880 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 8 00:23:43.850742 kernel: EXT4-fs (sda9): mounted filesystem 3cd35b5c-4e0e-45c1-abc9-cf70eebd42df r/w with ordered data mode. Quota mode: none.
Nov 8 00:23:43.850911 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 8 00:23:43.852832 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 8 00:23:43.887837 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 8 00:23:43.906750 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (940)
Nov 8 00:23:43.916249 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:23:43.916327 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:23:43.916495 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 8 00:23:43.921531 kernel: BTRFS info (device sda6): using free space tree
Nov 8 00:23:43.929231 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Nov 8 00:23:43.936969 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 8 00:23:43.937103 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 8 00:23:43.937146 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 8 00:23:43.945203 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 8 00:23:43.952970 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 8 00:23:43.966945 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 8 00:23:44.118019 systemd-networkd[898]: eth0: Gained IPv6LL
Nov 8 00:23:44.564631 coreos-metadata[957]: Nov 08 00:23:44.564 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Nov 8 00:23:44.571735 coreos-metadata[957]: Nov 08 00:23:44.571 INFO Fetch successful
Nov 8 00:23:44.575169 coreos-metadata[957]: Nov 08 00:23:44.571 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Nov 8 00:23:44.585019 coreos-metadata[957]: Nov 08 00:23:44.584 INFO Fetch successful
Nov 8 00:23:44.600207 coreos-metadata[957]: Nov 08 00:23:44.600 INFO wrote hostname ci-4081.3.6-n-2742f1d4ae to /sysroot/etc/hostname
Nov 8 00:23:44.606430 initrd-setup-root[969]: cut: /sysroot/etc/passwd: No such file or directory
Nov 8 00:23:44.608637 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 8 00:23:44.664371 initrd-setup-root[977]: cut: /sysroot/etc/group: No such file or directory
Nov 8 00:23:44.673680 initrd-setup-root[984]: cut: /sysroot/etc/shadow: No such file or directory
Nov 8 00:23:44.681000 initrd-setup-root[991]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 8 00:23:45.613457 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 8 00:23:45.624812 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 8 00:23:45.636223 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 8 00:23:45.643450 kernel: BTRFS info (device sda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:23:45.644269 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 8 00:23:45.671492 ignition[1058]: INFO : Ignition 2.19.0
Nov 8 00:23:45.675117 ignition[1058]: INFO : Stage: mount
Nov 8 00:23:45.675117 ignition[1058]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 00:23:45.675117 ignition[1058]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 8 00:23:45.675117 ignition[1058]: INFO : mount: mount passed
Nov 8 00:23:45.675117 ignition[1058]: INFO : Ignition finished successfully
Nov 8 00:23:45.677085 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 8 00:23:45.693278 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 8 00:23:45.703357 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 8 00:23:45.718961 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 8 00:23:45.736738 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1070)
Nov 8 00:23:45.743959 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e
Nov 8 00:23:45.744019 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 8 00:23:45.744736 kernel: BTRFS info (device sda6): using free space tree
Nov 8 00:23:45.755091 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 8 00:23:45.756567 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 8 00:23:45.781579 ignition[1087]: INFO : Ignition 2.19.0 Nov 8 00:23:45.781579 ignition[1087]: INFO : Stage: files Nov 8 00:23:45.787101 ignition[1087]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:23:45.787101 ignition[1087]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 8 00:23:45.787101 ignition[1087]: DEBUG : files: compiled without relabeling support, skipping Nov 8 00:23:45.799413 ignition[1087]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 8 00:23:45.799413 ignition[1087]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 8 00:23:45.901384 ignition[1087]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 8 00:23:45.905775 ignition[1087]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 8 00:23:45.905775 ignition[1087]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 8 00:23:45.901822 unknown[1087]: wrote ssh authorized keys file for user: core Nov 8 00:23:45.919758 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 8 00:23:45.925863 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 8 00:23:45.967546 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 8 00:23:46.003848 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 8 00:23:46.010080 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 8 00:23:46.010080 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 8 00:23:46.010080 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 8 00:23:46.010080 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 8 00:23:46.010080 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:23:46.010080 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:23:46.010080 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 8 00:23:46.010080 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 8 00:23:46.010080 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:23:46.010080 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:23:46.010080 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 8 00:23:46.010080 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 8 00:23:46.010080 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 8 00:23:46.010080 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Nov 8 00:23:46.307368 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 8 00:23:46.633471 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 8 00:23:46.633471 ignition[1087]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 8 00:23:46.643933 ignition[1087]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:23:46.649686 ignition[1087]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:23:46.649686 ignition[1087]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 8 00:23:46.659055 ignition[1087]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Nov 8 00:23:46.663252 ignition[1087]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Nov 8 00:23:46.667917 ignition[1087]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:23:46.673219 ignition[1087]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:23:46.673219 ignition[1087]: INFO : files: files passed Nov 8 00:23:46.673219 ignition[1087]: INFO : Ignition finished successfully Nov 8 00:23:46.673999 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 8 00:23:46.689965 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 8 00:23:46.699879 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 8 00:23:46.705065 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 8 00:23:46.705169 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 8 00:23:46.723462 initrd-setup-root-after-ignition[1115]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:23:46.723462 initrd-setup-root-after-ignition[1115]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:23:46.737938 initrd-setup-root-after-ignition[1119]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:23:46.727444 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:23:46.733248 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 8 00:23:46.760995 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 8 00:23:46.784760 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 8 00:23:46.784884 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 8 00:23:46.792416 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Nov 8 00:23:46.799419 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 8 00:23:46.800526 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 8 00:23:46.813912 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 8 00:23:46.830606 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:23:46.839965 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 8 00:23:46.853538 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:23:46.854897 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:23:46.855363 systemd[1]: Stopped target timers.target - Timer Units. Nov 8 00:23:46.856475 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 8 00:23:46.856612 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:23:46.857517 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 8 00:23:46.858110 systemd[1]: Stopped target basic.target - Basic System. Nov 8 00:23:46.858633 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 8 00:23:46.859190 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 8 00:23:46.859715 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 8 00:23:46.860224 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 8 00:23:46.860790 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 8 00:23:46.861368 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 8 00:23:46.861878 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 8 00:23:46.862366 systemd[1]: Stopped target swap.target - Swaps. Nov 8 00:23:46.862968 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 8 00:23:46.863100 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 8 00:23:46.863993 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:23:46.993151 ignition[1139]: INFO : Ignition 2.19.0 Nov 8 00:23:46.993151 ignition[1139]: INFO : Stage: umount Nov 8 00:23:46.993151 ignition[1139]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:23:46.993151 ignition[1139]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 8 00:23:46.993151 ignition[1139]: INFO : umount: umount passed Nov 8 00:23:46.993151 ignition[1139]: INFO : Ignition finished successfully Nov 8 00:23:46.864678 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:23:46.865146 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 8 00:23:46.913279 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:23:46.917487 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 8 00:23:46.917626 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 8 00:23:46.923556 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 8 00:23:46.923670 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:23:46.924066 systemd[1]: ignition-files.service: Deactivated successfully. Nov 8 00:23:46.924157 systemd[1]: Stopped ignition-files.service - Ignition (files). 
Nov 8 00:23:46.924584 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 8 00:23:46.924671 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 8 00:23:46.972905 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 8 00:23:46.976028 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 8 00:23:46.976273 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:23:47.002716 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 8 00:23:47.009963 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 8 00:23:47.010160 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:23:47.018053 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 8 00:23:47.018161 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 8 00:23:47.031114 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 8 00:23:47.031210 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 8 00:23:47.038120 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 8 00:23:47.038229 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 8 00:23:47.046606 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 8 00:23:47.046688 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 8 00:23:47.059920 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 8 00:23:47.059988 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 8 00:23:47.066613 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 8 00:23:47.066679 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 8 00:23:47.073106 systemd[1]: Stopped target network.target - Network. Nov 8 00:23:47.083189 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 8 00:23:47.083267 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:23:47.089864 systemd[1]: Stopped target paths.target - Path Units. Nov 8 00:23:47.095792 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 8 00:23:47.099346 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:23:47.103130 systemd[1]: Stopped target slices.target - Slice Units. Nov 8 00:23:47.106065 systemd[1]: Stopped target sockets.target - Socket Units. Nov 8 00:23:47.107314 systemd[1]: iscsid.socket: Deactivated successfully. Nov 8 00:23:47.107359 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 00:23:47.110420 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 8 00:23:47.110460 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 8 00:23:47.111057 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 8 00:23:47.111099 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 8 00:23:47.111705 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 8 00:23:47.112213 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 8 00:23:47.216284 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 8 00:23:47.219374 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 8 00:23:47.226296 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Nov 8 00:23:47.232738 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 8 00:23:47.232810 systemd-networkd[898]: eth0: DHCPv6 lease lost Nov 8 00:23:47.232863 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 8 00:23:47.241529 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 8 00:23:47.241657 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 8 00:23:47.245868 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 8 00:23:47.245938 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:23:47.270856 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 8 00:23:47.273541 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 8 00:23:47.273615 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:23:47.277498 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 8 00:23:47.277544 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:23:47.286661 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 8 00:23:47.286708 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 8 00:23:47.293810 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 8 00:23:47.293866 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:23:47.300774 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:23:47.330220 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 8 00:23:47.330390 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:23:47.335343 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 8 00:23:47.335423 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 8 00:23:47.340263 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 8 00:23:47.340311 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:23:47.343528 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 8 00:23:47.343574 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 8 00:23:47.351126 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 8 00:23:47.351173 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 8 00:23:47.358005 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 00:23:47.358054 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:23:47.362402 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 8 00:23:47.388147 kernel: hv_netvsc 000d3ab4-9ac3-000d-3ab4-9ac3000d3ab4 eth0: Data path switched from VF: enP2781s1 Nov 8 00:23:47.396681 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 8 00:23:47.396907 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:23:47.403866 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:23:47.403918 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:23:47.411676 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 8 00:23:47.411796 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. 
Nov 8 00:23:47.431638 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 8 00:23:47.437259 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 8 00:23:47.565350 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 8 00:23:47.565487 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 8 00:23:47.572045 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 8 00:23:47.581457 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 8 00:23:47.581545 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 8 00:23:47.595949 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 8 00:23:47.620952 systemd[1]: Switching root. Nov 8 00:23:47.693337 systemd-journald[176]: Journal stopped
0x3fff5000-0x3fff5027] Nov 8 00:23:38.136520 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Nov 8 00:23:38.136533 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Nov 8 00:23:38.136546 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Nov 8 00:23:38.136558 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Nov 8 00:23:38.136572 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Nov 8 00:23:38.136584 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Nov 8 00:23:38.136597 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Nov 8 00:23:38.136610 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Nov 8 00:23:38.136626 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Nov 8 00:23:38.136639 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Nov 8 00:23:38.136653 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Nov 8 00:23:38.136667 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Nov 8 00:23:38.136681 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Nov 8 00:23:38.136693 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Nov 8 00:23:38.136706 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Nov 8 00:23:38.136720 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Nov 8 00:23:38.136738 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Nov 8 00:23:38.136752 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Nov 8 00:23:38.136766 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Nov 8 00:23:38.136778 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Nov 8 00:23:38.136791 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Nov 8 00:23:38.136803 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Nov 8 00:23:38.136815 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Nov 8 00:23:38.136827 kernel: Zone ranges: Nov 8 00:23:38.136839 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 8 00:23:38.136857 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Nov 8 00:23:38.136869 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Nov 8 00:23:38.136881 kernel: Movable zone start for each node Nov 8 00:23:38.138263 kernel: Early memory node ranges Nov 8 00:23:38.138290 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Nov 8 00:23:38.138305 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Nov 8 00:23:38.138320 kernel: node 0: [mem 0x000000003ffc5000-0x000000003ffd2fff] Nov 8 00:23:38.138334 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Nov 8 00:23:38.138349 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Nov 8 00:23:38.138369 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Nov 8 00:23:38.138384 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 8 00:23:38.138399 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Nov 8 00:23:38.138413 kernel: On node 0, zone DMA32: 132 pages in unavailable ranges Nov 8 00:23:38.138427 kernel: On node 0, zone DMA32: 44 pages in unavailable ranges Nov 8 00:23:38.138441 kernel: ACPI: PM-Timer IO Port: 0x408 Nov 8 00:23:38.138456 kernel: ACPI: 
LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Nov 8 00:23:38.138470 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Nov 8 00:23:38.138485 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 8 00:23:38.138503 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 8 00:23:38.138518 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Nov 8 00:23:38.138532 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Nov 8 00:23:38.138547 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Nov 8 00:23:38.138561 kernel: Booting paravirtualized kernel on Hyper-V Nov 8 00:23:38.138576 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 8 00:23:38.138590 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Nov 8 00:23:38.138605 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u1048576 Nov 8 00:23:38.138619 kernel: pcpu-alloc: s196712 r8192 d32664 u1048576 alloc=1*2097152 Nov 8 00:23:38.138637 kernel: pcpu-alloc: [0] 0 1 Nov 8 00:23:38.138651 kernel: Hyper-V: PV spinlocks enabled Nov 8 00:23:38.138665 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Nov 8 00:23:38.138681 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472 Nov 8 00:23:38.138696 kernel: random: crng init done Nov 8 00:23:38.138710 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Nov 8 00:23:38.138725 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 8 00:23:38.138739 kernel: Fallback order for Node 0: 0 Nov 8 00:23:38.138757 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062376 Nov 8 00:23:38.138782 kernel: Policy zone: Normal Nov 8 00:23:38.138797 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 8 00:23:38.138815 kernel: software IO TLB: area num 2. Nov 8 00:23:38.138830 kernel: Memory: 8074604K/8387516K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42880K init, 2320K bss, 312652K reserved, 0K cma-reserved) Nov 8 00:23:38.138846 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Nov 8 00:23:38.138861 kernel: ftrace: allocating 37980 entries in 149 pages Nov 8 00:23:38.138876 kernel: ftrace: allocated 149 pages with 4 groups Nov 8 00:23:38.138892 kernel: Dynamic Preempt: voluntary Nov 8 00:23:38.138907 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 8 00:23:38.138931 kernel: rcu: RCU event tracing is enabled. Nov 8 00:23:38.138947 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Nov 8 00:23:38.138963 kernel: Trampoline variant of Tasks RCU enabled. Nov 8 00:23:38.138978 kernel: Rude variant of Tasks RCU enabled. Nov 8 00:23:38.138993 kernel: Tracing variant of Tasks RCU enabled. Nov 8 00:23:38.139008 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Nov 8 00:23:38.139024 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Nov 8 00:23:38.139043 kernel: Using NULL legacy PIC Nov 8 00:23:38.139059 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Nov 8 00:23:38.139073 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 8 00:23:38.139089 kernel: Console: colour dummy device 80x25 Nov 8 00:23:38.139104 kernel: printk: console [tty1] enabled Nov 8 00:23:38.139119 kernel: printk: console [ttyS0] enabled Nov 8 00:23:38.139133 kernel: printk: bootconsole [earlyser0] disabled Nov 8 00:23:38.139148 kernel: ACPI: Core revision 20230628 Nov 8 00:23:38.139163 kernel: Failed to register legacy timer interrupt Nov 8 00:23:38.139181 kernel: APIC: Switch to symmetric I/O mode setup Nov 8 00:23:38.139197 kernel: Hyper-V: enabling crash_kexec_post_notifiers Nov 8 00:23:38.139211 kernel: Hyper-V: Using IPI hypercalls Nov 8 00:23:38.139225 kernel: APIC: send_IPI() replaced with hv_send_ipi() Nov 8 00:23:38.139254 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Nov 8 00:23:38.139267 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Nov 8 00:23:38.139279 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Nov 8 00:23:38.139294 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Nov 8 00:23:38.139309 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Nov 8 00:23:38.139327 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593907) Nov 8 00:23:38.139339 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Nov 8 00:23:38.139352 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Nov 8 00:23:38.139361 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 8 00:23:38.139370 kernel: Spectre V2 : Mitigation: Retpolines Nov 8 00:23:38.139382 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Nov 8 00:23:38.139390 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Nov 8 00:23:38.139402 kernel: RETBleed: Vulnerable Nov 8 00:23:38.139410 kernel: Speculative Store Bypass: Vulnerable Nov 8 00:23:38.139418 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Nov 8 00:23:38.139432 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Nov 8 00:23:38.139441 kernel: active return thunk: its_return_thunk Nov 8 00:23:38.139449 kernel: ITS: Mitigation: Aligned branch/return thunks Nov 8 00:23:38.139457 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 8 00:23:38.139468 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 8 00:23:38.139477 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 8 00:23:38.139487 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Nov 8 00:23:38.139496 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Nov 8 00:23:38.139507 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Nov 8 00:23:38.139517 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 8 00:23:38.139526 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Nov 8 00:23:38.139537 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Nov 8 00:23:38.139548 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Nov 8 00:23:38.139556 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Nov 8 00:23:38.139564 kernel: Freeing SMP alternatives memory: 32K Nov 8 00:23:38.139572 kernel: pid_max: default: 32768 minimum: 301 Nov 8 00:23:38.139583 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Nov 8 00:23:38.139591 kernel: landlock: Up and running. Nov 8 00:23:38.139599 kernel: SELinux: Initializing. Nov 8 00:23:38.139611 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 8 00:23:38.139620 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 8 00:23:38.139628 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Nov 8 00:23:38.139641 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 8 00:23:38.139650 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 8 00:23:38.139658 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 8 00:23:38.139668 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Nov 8 00:23:38.139678 kernel: signal: max sigframe size: 3632 Nov 8 00:23:38.139686 kernel: rcu: Hierarchical SRCU implementation. Nov 8 00:23:38.139696 kernel: rcu: Max phase no-delay instances is 400. Nov 8 00:23:38.139706 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Nov 8 00:23:38.139714 kernel: smp: Bringing up secondary CPUs ... Nov 8 00:23:38.139726 kernel: smpboot: x86: Booting SMP configuration: Nov 8 00:23:38.139735 kernel: .... node #0, CPUs: #1 Nov 8 00:23:38.139744 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Nov 8 00:23:38.139756 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Nov 8 00:23:38.139765 kernel: smp: Brought up 1 node, 2 CPUs Nov 8 00:23:38.139773 kernel: smpboot: Max logical packages: 1 Nov 8 00:23:38.139784 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Nov 8 00:23:38.139793 kernel: devtmpfs: initialized Nov 8 00:23:38.139802 kernel: x86/mm: Memory block size: 128MB Nov 8 00:23:38.139812 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Nov 8 00:23:38.139820 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 8 00:23:38.139832 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Nov 8 00:23:38.139840 kernel: pinctrl core: initialized pinctrl subsystem Nov 8 00:23:38.139848 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 8 00:23:38.139856 kernel: audit: initializing netlink subsys (disabled) Nov 8 00:23:38.139865 kernel: audit: type=2000 audit(1762561416.029:1): state=initialized audit_enabled=0 res=1 Nov 8 00:23:38.139873 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 8 00:23:38.139883 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 8 00:23:38.139891 kernel: cpuidle: using governor menu Nov 8 00:23:38.139902 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 8 00:23:38.139911 kernel: dca service started, version 1.12.1 Nov 8 00:23:38.139919 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Nov 8 00:23:38.139927 kernel: e820: reserve RAM buffer [mem 0x3ffd3000-0x3fffffff] Nov 8 00:23:38.139935 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Nov 8 00:23:38.139947 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 8 00:23:38.139955 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 8 00:23:38.139968 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 8 00:23:38.139978 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 8 00:23:38.139986 kernel: ACPI: Added _OSI(Module Device) Nov 8 00:23:38.139994 kernel: ACPI: Added _OSI(Processor Device) Nov 8 00:23:38.140002 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 8 00:23:38.140014 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 8 00:23:38.140022 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Nov 8 00:23:38.140030 kernel: ACPI: Interpreter enabled Nov 8 00:23:38.140038 kernel: ACPI: PM: (supports S0 S5) Nov 8 00:23:38.140051 kernel: ACPI: Using IOAPIC for interrupt routing Nov 8 00:23:38.140060 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 8 00:23:38.140068 kernel: PCI: Ignoring E820 reservations for host bridge windows Nov 8 00:23:38.140079 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Nov 8 00:23:38.140088 kernel: iommu: Default domain type: Translated Nov 8 00:23:38.140096 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 8 00:23:38.140104 kernel: efivars: Registered efivars operations Nov 8 00:23:38.140112 kernel: PCI: Using ACPI for IRQ routing Nov 8 00:23:38.140123 kernel: PCI: System does not support PCI Nov 8 00:23:38.140131 kernel: vgaarb: loaded Nov 8 00:23:38.140142 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Nov 8 00:23:38.140152 kernel: VFS: Disk quotas dquot_6.6.0 Nov 8 00:23:38.140161 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 8 00:23:38.140169 kernel: pnp: PnP ACPI init Nov 8 
00:23:38.140177 kernel: pnp: PnP ACPI: found 3 devices Nov 8 00:23:38.140189 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 8 00:23:38.140197 kernel: NET: Registered PF_INET protocol family Nov 8 00:23:38.140205 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Nov 8 00:23:38.140214 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Nov 8 00:23:38.140224 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 8 00:23:38.140245 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 8 00:23:38.140253 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Nov 8 00:23:38.140261 kernel: TCP: Hash tables configured (established 65536 bind 65536) Nov 8 00:23:38.140273 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Nov 8 00:23:38.140281 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Nov 8 00:23:38.140290 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 8 00:23:38.140298 kernel: NET: Registered PF_XDP protocol family Nov 8 00:23:38.140309 kernel: PCI: CLS 0 bytes, default 64 Nov 8 00:23:38.140317 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Nov 8 00:23:38.140325 kernel: software IO TLB: mapped [mem 0x000000003b339000-0x000000003f339000] (64MB) Nov 8 00:23:38.140337 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Nov 8 00:23:38.140345 kernel: Initialise system trusted keyrings Nov 8 00:23:38.140353 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Nov 8 00:23:38.140361 kernel: Key type asymmetric registered Nov 8 00:23:38.140369 kernel: Asymmetric key parser 'x509' registered Nov 8 00:23:38.140377 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Nov 8 00:23:38.140388 kernel: io scheduler mq-deadline registered Nov 8 00:23:38.140399 kernel: io scheduler kyber registered Nov 8 00:23:38.140407 kernel: io scheduler bfq registered Nov 8 00:23:38.140415 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 8 00:23:38.140423 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 8 00:23:38.140431 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 8 00:23:38.140440 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Nov 8 00:23:38.140451 kernel: i8042: PNP: No PS/2 controller found. 
Nov 8 00:23:38.140589 kernel: rtc_cmos 00:02: registered as rtc0 Nov 8 00:23:38.140687 kernel: rtc_cmos 00:02: setting system clock to 2025-11-08T00:23:37 UTC (1762561417) Nov 8 00:23:38.140774 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Nov 8 00:23:38.140785 kernel: intel_pstate: CPU model not supported Nov 8 00:23:38.140798 kernel: efifb: probing for efifb Nov 8 00:23:38.140806 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Nov 8 00:23:38.140815 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Nov 8 00:23:38.140823 kernel: efifb: scrolling: redraw Nov 8 00:23:38.140833 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Nov 8 00:23:38.140846 kernel: Console: switching to colour frame buffer device 128x48 Nov 8 00:23:38.140854 kernel: fb0: EFI VGA frame buffer device Nov 8 00:23:38.140862 kernel: pstore: Using crash dump compression: deflate Nov 8 00:23:38.140870 kernel: pstore: Registered efi_pstore as persistent store backend Nov 8 00:23:38.140878 kernel: NET: Registered PF_INET6 protocol family Nov 8 00:23:38.140886 kernel: Segment Routing with IPv6 Nov 8 00:23:38.140898 kernel: In-situ OAM (IOAM) with IPv6 Nov 8 00:23:38.140906 kernel: NET: Registered PF_PACKET protocol family Nov 8 00:23:38.140914 kernel: Key type dns_resolver registered Nov 8 00:23:38.140928 kernel: IPI shorthand broadcast: enabled Nov 8 00:23:38.140937 kernel: sched_clock: Marking stable (1001003200, 55339500)->(1313575200, -257232500) Nov 8 00:23:38.140945 kernel: registered taskstats version 1 Nov 8 00:23:38.140956 kernel: Loading compiled-in X.509 certificates Nov 8 00:23:38.140965 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cf7a35a152685ec84a621291e4ce58c959319dfd' Nov 8 00:23:38.140973 kernel: Key type .fscrypt registered Nov 8 00:23:38.140981 kernel: Key type fscrypt-provisioning registered Nov 8 00:23:38.140989 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 8 00:23:38.141000 kernel: ima: Allocated hash algorithm: sha1 Nov 8 00:23:38.141011 kernel: ima: No architecture policies found Nov 8 00:23:38.141019 kernel: clk: Disabling unused clocks Nov 8 00:23:38.141031 kernel: Freeing unused kernel image (initmem) memory: 42880K Nov 8 00:23:38.141040 kernel: Write protecting the kernel read-only data: 36864k Nov 8 00:23:38.141048 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Nov 8 00:23:38.141060 kernel: Run /init as init process Nov 8 00:23:38.141068 kernel: with arguments: Nov 8 00:23:38.141076 kernel: /init Nov 8 00:23:38.141088 kernel: with environment: Nov 8 00:23:38.141098 kernel: HOME=/ Nov 8 00:23:38.141106 kernel: TERM=linux Nov 8 00:23:38.141118 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 8 00:23:38.141129 systemd[1]: Detected virtualization microsoft. Nov 8 00:23:38.141138 systemd[1]: Detected architecture x86-64. Nov 8 00:23:38.141150 systemd[1]: Running in initrd. Nov 8 00:23:38.141158 systemd[1]: No hostname configured, using default hostname. Nov 8 00:23:38.141167 systemd[1]: Hostname set to . Nov 8 00:23:38.141178 systemd[1]: Initializing machine ID from random generator. Nov 8 00:23:38.141186 systemd[1]: Queued start job for default target initrd.target. 
Nov 8 00:23:38.141195 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:23:38.141203 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:23:38.141212 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 8 00:23:38.141225 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 8 00:23:38.141244 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 8 00:23:38.141256 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 8 00:23:38.141268 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 8 00:23:38.141277 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 8 00:23:38.141286 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:23:38.141298 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:23:38.141307 systemd[1]: Reached target paths.target - Path Units. Nov 8 00:23:38.141316 systemd[1]: Reached target slices.target - Slice Units. Nov 8 00:23:38.141324 systemd[1]: Reached target swap.target - Swaps. Nov 8 00:23:38.141339 systemd[1]: Reached target timers.target - Timer Units. Nov 8 00:23:38.141347 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 00:23:38.141359 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 8 00:23:38.141369 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 8 00:23:38.141377 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 8 00:23:38.141390 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:23:38.141398 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 8 00:23:38.141407 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:23:38.141422 systemd[1]: Reached target sockets.target - Socket Units. Nov 8 00:23:38.141430 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 8 00:23:38.141439 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 8 00:23:38.141451 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 8 00:23:38.141459 systemd[1]: Starting systemd-fsck-usr.service... Nov 8 00:23:38.141468 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 8 00:23:38.141476 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 8 00:23:38.141505 systemd-journald[176]: Collecting audit messages is disabled. Nov 8 00:23:38.141532 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:23:38.141542 systemd-journald[176]: Journal started Nov 8 00:23:38.141564 systemd-journald[176]: Runtime Journal (/run/log/journal/9ccc59a53ecf4b4eb972e2efd3e551a1) is 8.0M, max 158.8M, 150.8M free. Nov 8 00:23:38.161796 systemd[1]: Started systemd-journald.service - Journal Service. Nov 8 00:23:38.164217 systemd-modules-load[177]: Inserted module 'overlay' Nov 8 00:23:38.171348 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. 
Nov 8 00:23:38.183514 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:23:38.194034 systemd[1]: Finished systemd-fsck-usr.service. Nov 8 00:23:38.209626 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 8 00:23:38.199981 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:23:38.217255 kernel: Bridge firewalling registered Nov 8 00:23:38.217218 systemd-modules-load[177]: Inserted module 'br_netfilter' Nov 8 00:23:38.223444 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:23:38.230369 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 8 00:23:38.239392 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 8 00:23:38.244543 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 8 00:23:38.257899 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 8 00:23:38.271552 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 8 00:23:38.278370 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 8 00:23:38.290525 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:23:38.294897 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:23:38.304535 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:23:38.310323 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:23:38.324451 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 8 00:23:38.332432 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 8 00:23:38.345988 dracut-cmdline[213]: dracut-dracut-053 Nov 8 00:23:38.353256 dracut-cmdline[213]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=480a02cf7f2001774aa495c3e719d4173e968e6839485a7d2b207ef2facca472 Nov 8 00:23:38.405542 systemd-resolved[216]: Positive Trust Anchors: Nov 8 00:23:38.405556 systemd-resolved[216]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 8 00:23:38.405612 systemd-resolved[216]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 8 00:23:38.437047 systemd-resolved[216]: Defaulting to hostname 'linux'. 
Nov 8 00:23:38.449702 kernel: SCSI subsystem initialized Nov 8 00:23:38.441629 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 8 00:23:38.448749 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:23:38.462249 kernel: Loading iSCSI transport class v2.0-870. Nov 8 00:23:38.474294 kernel: iscsi: registered transport (tcp) Nov 8 00:23:38.497428 kernel: iscsi: registered transport (qla4xxx) Nov 8 00:23:38.497517 kernel: QLogic iSCSI HBA Driver Nov 8 00:23:38.534355 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 8 00:23:38.544475 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 8 00:23:38.579466 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 8 00:23:38.579561 kernel: device-mapper: uevent: version 1.0.3 Nov 8 00:23:38.583496 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 8 00:23:38.625261 kernel: raid6: avx512x4 gen() 18055 MB/s Nov 8 00:23:38.644252 kernel: raid6: avx512x2 gen() 18117 MB/s Nov 8 00:23:38.663239 kernel: raid6: avx512x1 gen() 18175 MB/s Nov 8 00:23:38.683246 kernel: raid6: avx2x4 gen() 17871 MB/s Nov 8 00:23:38.702247 kernel: raid6: avx2x2 gen() 18033 MB/s Nov 8 00:23:38.722951 kernel: raid6: avx2x1 gen() 13454 MB/s Nov 8 00:23:38.722996 kernel: raid6: using algorithm avx512x1 gen() 18175 MB/s Nov 8 00:23:38.746379 kernel: raid6: .... xor() 26587 MB/s, rmw enabled Nov 8 00:23:38.746415 kernel: raid6: using avx512x2 recovery algorithm Nov 8 00:23:38.769254 kernel: xor: automatically using best checksumming function avx Nov 8 00:23:38.923260 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 8 00:23:38.933257 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 8 00:23:38.945522 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:23:38.958829 systemd-udevd[399]: Using default interface naming scheme 'v255'. Nov 8 00:23:38.963309 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:23:38.977405 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 8 00:23:38.990595 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation Nov 8 00:23:39.020172 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 8 00:23:39.033450 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 8 00:23:39.077432 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:23:39.095480 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 8 00:23:39.136765 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 8 00:23:39.144408 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 8 00:23:39.152178 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:23:39.156714 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 8 00:23:39.172536 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 8 00:23:39.188052 kernel: cryptd: max_cpu_qlen set to 1000 Nov 8 00:23:39.188102 kernel: hv_vmbus: Vmbus version:5.2 Nov 8 00:23:39.190637 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Nov 8 00:23:39.191929 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:23:39.201677 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:23:39.209296 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:23:39.209498 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:23:39.215362 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:23:39.238944 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:23:39.259149 kernel: pps_core: LinuxPPS API ver. 1 registered Nov 8 00:23:39.259175 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Nov 8 00:23:39.262279 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 8 00:23:39.285687 kernel: hv_vmbus: registering driver hyperv_keyboard Nov 8 00:23:39.285735 kernel: AVX2 version of gcm_enc/dec engaged. Nov 8 00:23:39.288245 kernel: AES CTR mode by8 optimization enabled Nov 8 00:23:39.297446 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:23:39.313514 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Nov 8 00:23:39.297591 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:23:39.323260 kernel: hv_vmbus: registering driver hv_netvsc Nov 8 00:23:39.331699 kernel: PTP clock support registered Nov 8 00:23:39.327657 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:23:39.348257 kernel: hv_vmbus: registering driver hv_storvsc Nov 8 00:23:39.353959 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 8 00:23:39.353999 kernel: hv_utils: Registering HyperV Utility Driver Nov 8 00:23:39.357699 kernel: hv_vmbus: registering driver hv_utils Nov 8 00:23:39.359256 kernel: hv_utils: Heartbeat IC version 3.0 Nov 8 00:23:39.359294 kernel: hv_utils: Shutdown IC version 3.2 Nov 8 00:23:39.359314 kernel: hv_utils: TimeSync IC version 4.0 Nov 8 00:23:39.587523 systemd-resolved[216]: Clock change detected. Flushing caches. Nov 8 00:23:39.618110 kernel: scsi host1: storvsc_host_t Nov 8 00:23:39.618361 kernel: scsi host0: storvsc_host_t Nov 8 00:23:39.618521 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Nov 8 00:23:39.618692 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5 Nov 8 00:23:39.625746 kernel: hv_vmbus: registering driver hid_hyperv Nov 8 00:23:39.637741 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Nov 8 00:23:39.637784 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Nov 8 00:23:39.645095 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:23:39.660827 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Nov 8 00:23:39.661108 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 8 00:23:39.662739 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Nov 8 00:23:39.663160 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Nov 8 00:23:39.683754 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Nov 8 00:23:39.684081 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Nov 8 00:23:39.688502 kernel: sd 0:0:0:0: [sda] Write Protect is off Nov 8 00:23:39.688751 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Nov 8 00:23:39.696701 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Nov 8 00:23:39.695542 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:23:39.714271 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:23:39.714317 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Nov 8 00:23:39.714529 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#94 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Nov 8 00:23:39.742780 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#295 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Nov 8 00:23:39.758298 kernel: hv_netvsc 000d3ab4-9ac3-000d-3ab4-9ac3000d3ab4 eth0: VF slot 1 added Nov 8 00:23:39.769239 kernel: hv_vmbus: registering driver hv_pci Nov 8 00:23:39.776787 kernel: hv_pci 2e871faa-0add-4e8d-8cd0-1fdecd8375ce: PCI VMBus probing: Using version 0x10004 Nov 8 00:23:39.777008 kernel: hv_pci 2e871faa-0add-4e8d-8cd0-1fdecd8375ce: PCI host bridge to bus 0add:00 Nov 8 00:23:39.783575 kernel: pci_bus 0add:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Nov 8 00:23:39.787215 kernel: pci_bus 0add:00: No busn resource found for root bus, will use [bus 00-ff] Nov 8 00:23:39.793962 kernel: pci 0add:00:02.0: [15b3:1016] type 00 class 0x020000 Nov 8 00:23:39.799796 kernel: pci 0add:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Nov 8 00:23:39.803776 kernel: pci 0add:00:02.0: enabling Extended Tags Nov 8 00:23:39.816798 kernel: pci 0add:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 0add:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Nov 8 00:23:39.824440 kernel: pci_bus 0add:00: busn_res: [bus 00-ff] end is updated to 00 Nov 8 00:23:39.824790 kernel: pci 0add:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Nov 8 00:23:39.999767 kernel: mlx5_core 0add:00:02.0: enabling device (0000 -> 0002) Nov 8 00:23:40.004748 kernel: mlx5_core 0add:00:02.0: firmware version: 14.30.5006 Nov 8 00:23:40.198587 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Nov 8 00:23:40.223752 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (451) Nov 8 00:23:40.230871 kernel: hv_netvsc 000d3ab4-9ac3-000d-3ab4-9ac3000d3ab4 eth0: VF registering: eth1 Nov 8 00:23:40.231090 kernel: BTRFS: device fsid a2737782-a37e-42f9-8b56-489a87f47acc devid 1 transid 35 /dev/sda3 scanned by (udev-worker) (460) Nov 8 00:23:40.240310 kernel: mlx5_core 0add:00:02.0 eth1: joined to eth0 Nov 8 00:23:40.249172 kernel: mlx5_core 0add:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Nov 8 00:23:40.263755 kernel: mlx5_core 0add:00:02.0 enP2781s1: renamed from eth1 Nov 8 00:23:40.275110 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Nov 8 00:23:40.290440 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Nov 8 00:23:40.294814 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Nov 8 00:23:40.308281 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. 
Nov 8 00:23:40.323888 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 8 00:23:40.341778 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:23:40.348744 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:23:41.364289 disk-uuid[603]: The operation has completed successfully. Nov 8 00:23:41.367635 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 8 00:23:41.445977 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 8 00:23:41.446101 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 8 00:23:41.477914 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 8 00:23:41.486128 sh[716]: Success Nov 8 00:23:41.519771 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Nov 8 00:23:41.777583 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 8 00:23:41.790863 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 8 00:23:41.797055 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 8 00:23:41.826347 kernel: BTRFS info (device dm-0): first mount of filesystem a2737782-a37e-42f9-8b56-489a87f47acc Nov 8 00:23:41.826434 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:23:41.830613 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 8 00:23:41.837456 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 8 00:23:41.840353 kernel: BTRFS info (device dm-0): using free space tree Nov 8 00:23:42.211799 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 8 00:23:42.213750 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 8 00:23:42.224132 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 8 00:23:42.230128 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 8 00:23:42.263353 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:23:42.263432 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:23:42.266641 kernel: BTRFS info (device sda6): using free space tree Nov 8 00:23:42.304161 kernel: BTRFS info (device sda6): auto enabling async discard Nov 8 00:23:42.319918 kernel: BTRFS info (device sda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:23:42.319408 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 8 00:23:42.327416 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:23:42.341889 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 8 00:23:42.348641 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 8 00:23:42.359915 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 8 00:23:42.378932 systemd-networkd[898]: lo: Link UP Nov 8 00:23:42.378939 systemd-networkd[898]: lo: Gained carrier Nov 8 00:23:42.384879 systemd-networkd[898]: Enumeration completed Nov 8 00:23:42.386609 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 8 00:23:42.387077 systemd-networkd[898]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
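verity-setup is what backs the read-only /usr mapping: dm-verity hashes the partition in fixed-size blocks (sha256, AVX2-accelerated per the line above) and folds the digests into a Merkle tree whose root must match the hash passed on the kernel command line, so any modified block fails verification at read time. A toy sketch of the tree construction, simplified from the real on-disk format (dm-verity also salts each hash and packs tree levels into blocks):

    import hashlib

    BLOCK = 4096

    def verity_root(data: bytes) -> str:
        # Level 0: one sha256 digest per data block.
        level = [hashlib.sha256(data[i:i + BLOCK]).digest()
                 for i in range(0, len(data), BLOCK)]
        # Fold pairs of digests upward until a single root remains.
        while len(level) > 1:
            level = [hashlib.sha256(b"".join(level[i:i + 2])).digest()
                     for i in range(0, len(level), 2)]
        return level[0].hex()

    print(verity_root(b"\0" * (8 * BLOCK)))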
Nov 8 00:23:42.387081 systemd-networkd[898]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 8 00:23:42.393059 systemd[1]: Reached target network.target - Network. Nov 8 00:23:42.445752 kernel: mlx5_core 0add:00:02.0 enP2781s1: Link up Nov 8 00:23:42.478768 kernel: hv_netvsc 000d3ab4-9ac3-000d-3ab4-9ac3000d3ab4 eth0: Data path switched to VF: enP2781s1 Nov 8 00:23:42.479603 systemd-networkd[898]: enP2781s1: Link UP Nov 8 00:23:42.479750 systemd-networkd[898]: eth0: Link UP Nov 8 00:23:42.479915 systemd-networkd[898]: eth0: Gained carrier Nov 8 00:23:42.479928 systemd-networkd[898]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:23:42.493197 systemd-networkd[898]: enP2781s1: Gained carrier Nov 8 00:23:42.520783 systemd-networkd[898]: eth0: DHCPv4 address 10.200.8.42/24, gateway 10.200.8.1 acquired from 168.63.129.16 Nov 8 00:23:43.351941 ignition[900]: Ignition 2.19.0 Nov 8 00:23:43.351955 ignition[900]: Stage: fetch-offline Nov 8 00:23:43.352007 ignition[900]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:23:43.352018 ignition[900]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 8 00:23:43.352144 ignition[900]: parsed url from cmdline: "" Nov 8 00:23:43.352150 ignition[900]: no config URL provided Nov 8 00:23:43.352157 ignition[900]: reading system config file "/usr/lib/ignition/user.ign" Nov 8 00:23:43.352169 ignition[900]: no config at "/usr/lib/ignition/user.ign" Nov 8 00:23:43.352176 ignition[900]: failed to fetch config: resource requires networking Nov 8 00:23:43.352489 ignition[900]: Ignition finished successfully Nov 8 00:23:43.371954 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:23:43.389942 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Nov 8 00:23:43.405831 ignition[909]: Ignition 2.19.0 Nov 8 00:23:43.405844 ignition[909]: Stage: fetch Nov 8 00:23:43.406077 ignition[909]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:23:43.406091 ignition[909]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 8 00:23:43.406210 ignition[909]: parsed url from cmdline: "" Nov 8 00:23:43.406214 ignition[909]: no config URL provided Nov 8 00:23:43.406219 ignition[909]: reading system config file "/usr/lib/ignition/user.ign" Nov 8 00:23:43.406227 ignition[909]: no config at "/usr/lib/ignition/user.ign" Nov 8 00:23:43.406248 ignition[909]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Nov 8 00:23:43.530655 ignition[909]: GET result: OK Nov 8 00:23:43.530770 ignition[909]: config has been read from IMDS userdata Nov 8 00:23:43.530809 ignition[909]: parsing config with SHA512: 9c91a669a439a898a8193a470c9864f216ca995124385422287c30ad53438902622c89fb770753d8c741886a4e9e49d88d39ab24fef053ea7b4646b287a86c58 Nov 8 00:23:43.536750 unknown[909]: fetched base config from "system" Nov 8 00:23:43.536772 unknown[909]: fetched base config from "system" Nov 8 00:23:43.537693 ignition[909]: fetch: fetch complete Nov 8 00:23:43.536781 unknown[909]: fetched user config from "azure" Nov 8 00:23:43.537711 ignition[909]: fetch: fetch passed Nov 8 00:23:43.539313 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 8 00:23:43.537767 ignition[909]: Ignition finished successfully Nov 8 00:23:43.552285 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
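Ignition's fetch stage pulls userdata from the Azure instance metadata service at the URL logged above, then records the SHA512 of the config it parsed. A minimal sketch of that kind of fetch-and-digest step; the "Metadata: true" header is required by Azure IMDS, the endpoint only resolves from inside an Azure VM, and the userData payload comes back base64-encoded:

    import base64
    import hashlib
    import urllib.request

    URL = ("http://169.254.169.254/metadata/instance/compute/userData"
           "?api-version=2021-01-01&format=text")

    req = urllib.request.Request(URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        config = base64.b64decode(resp.read())
    print("SHA512:", hashlib.sha512(config).hexdigest())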
Nov 8 00:23:43.568422 ignition[915]: Ignition 2.19.0 Nov 8 00:23:43.568434 ignition[915]: Stage: kargs Nov 8 00:23:43.571389 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 8 00:23:43.568674 ignition[915]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:23:43.568688 ignition[915]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 8 00:23:43.569506 ignition[915]: kargs: kargs passed Nov 8 00:23:43.569551 ignition[915]: Ignition finished successfully Nov 8 00:23:43.590296 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 8 00:23:43.606987 ignition[921]: Ignition 2.19.0 Nov 8 00:23:43.606999 ignition[921]: Stage: disks Nov 8 00:23:43.607231 ignition[921]: no configs at "/usr/lib/ignition/base.d" Nov 8 00:23:43.609655 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 8 00:23:43.607245 ignition[921]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 8 00:23:43.616342 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 8 00:23:43.608123 ignition[921]: disks: disks passed Nov 8 00:23:43.608170 ignition[921]: Ignition finished successfully Nov 8 00:23:43.634045 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 8 00:23:43.638068 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 8 00:23:43.641256 systemd[1]: Reached target sysinit.target - System Initialization. Nov 8 00:23:43.644531 systemd[1]: Reached target basic.target - Basic System. Nov 8 00:23:43.664878 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 8 00:23:43.729742 systemd-fsck[929]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Nov 8 00:23:43.735634 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 8 00:23:43.749880 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 8 00:23:43.850742 kernel: EXT4-fs (sda9): mounted filesystem 3cd35b5c-4e0e-45c1-abc9-cf70eebd42df r/w with ordered data mode. Quota mode: none. Nov 8 00:23:43.850911 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 8 00:23:43.852832 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 8 00:23:43.887837 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 8 00:23:43.906750 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (940) Nov 8 00:23:43.916249 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:23:43.916327 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:23:43.916495 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 8 00:23:43.921531 kernel: BTRFS info (device sda6): using free space tree Nov 8 00:23:43.929231 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Nov 8 00:23:43.936969 kernel: BTRFS info (device sda6): auto enabling async discard Nov 8 00:23:43.937103 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 8 00:23:43.937146 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 8 00:23:43.945203 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 8 00:23:43.952970 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 8 00:23:43.966945 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
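systemd-fsck's "clean, 14/7326000 files, 477710/7359488 blocks" is ext4's inode and block usage on the freshly checked ROOT filesystem, which is nearly empty at this point:

    inodes_used, inodes_total = 14, 7_326_000
    blocks_used, blocks_total = 477_710, 7_359_488
    print(f"inodes in use: {inodes_used / inodes_total:.6%}")  # 0.000191%
    print(f"blocks in use: {blocks_used / blocks_total:.2%}")  # 6.49%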
Nov 8 00:23:44.118019 systemd-networkd[898]: eth0: Gained IPv6LL Nov 8 00:23:44.564631 coreos-metadata[957]: Nov 08 00:23:44.564 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Nov 8 00:23:44.571735 coreos-metadata[957]: Nov 08 00:23:44.571 INFO Fetch successful Nov 8 00:23:44.575169 coreos-metadata[957]: Nov 08 00:23:44.571 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Nov 8 00:23:44.585019 coreos-metadata[957]: Nov 08 00:23:44.584 INFO Fetch successful Nov 8 00:23:44.600207 coreos-metadata[957]: Nov 08 00:23:44.600 INFO wrote hostname ci-4081.3.6-n-2742f1d4ae to /sysroot/etc/hostname Nov 8 00:23:44.606430 initrd-setup-root[969]: cut: /sysroot/etc/passwd: No such file or directory Nov 8 00:23:44.608637 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 8 00:23:44.664371 initrd-setup-root[977]: cut: /sysroot/etc/group: No such file or directory Nov 8 00:23:44.673680 initrd-setup-root[984]: cut: /sysroot/etc/shadow: No such file or directory Nov 8 00:23:44.681000 initrd-setup-root[991]: cut: /sysroot/etc/gshadow: No such file or directory Nov 8 00:23:45.613457 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 8 00:23:45.624812 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 8 00:23:45.636223 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 8 00:23:45.643450 kernel: BTRFS info (device sda6): last unmount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:23:45.644269 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 8 00:23:45.671492 ignition[1058]: INFO : Ignition 2.19.0 Nov 8 00:23:45.675117 ignition[1058]: INFO : Stage: mount Nov 8 00:23:45.675117 ignition[1058]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:23:45.675117 ignition[1058]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 8 00:23:45.675117 ignition[1058]: INFO : mount: mount passed Nov 8 00:23:45.675117 ignition[1058]: INFO : Ignition finished successfully Nov 8 00:23:45.677085 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 8 00:23:45.693278 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 8 00:23:45.703357 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 8 00:23:45.718961 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 8 00:23:45.736738 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1070) Nov 8 00:23:45.743959 kernel: BTRFS info (device sda6): first mount of filesystem 7b59d8a2-cf4e-4d67-8d1e-00d7f134f45e Nov 8 00:23:45.744019 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 8 00:23:45.744736 kernel: BTRFS info (device sda6): using free space tree Nov 8 00:23:45.755091 kernel: BTRFS info (device sda6): auto enabling async discard Nov 8 00:23:45.756567 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
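flatcar-metadata-hostname asks IMDS for the instance name and writes it to /sysroot/etc/hostname so the real root boots under the platform-assigned name (ci-4081.3.6-n-2742f1d4ae here). A hedged sketch of those two steps, with the URL and target path taken from the log:

    import urllib.request

    URL = ("http://169.254.169.254/metadata/instance/compute/name"
           "?api-version=2017-08-01&format=text")

    req = urllib.request.Request(URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:  # Azure VMs only
        name = resp.read().decode().strip()

    with open("/sysroot/etc/hostname", "w") as f:  # /sysroot is the initrd's view of the root fs
        f.write(name + "\n")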
Nov 8 00:23:45.781579 ignition[1087]: INFO : Ignition 2.19.0 Nov 8 00:23:45.781579 ignition[1087]: INFO : Stage: files Nov 8 00:23:45.787101 ignition[1087]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:23:45.787101 ignition[1087]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 8 00:23:45.787101 ignition[1087]: DEBUG : files: compiled without relabeling support, skipping Nov 8 00:23:45.799413 ignition[1087]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 8 00:23:45.799413 ignition[1087]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 8 00:23:45.901384 ignition[1087]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 8 00:23:45.905775 ignition[1087]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 8 00:23:45.905775 ignition[1087]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 8 00:23:45.901822 unknown[1087]: wrote ssh authorized keys file for user: core Nov 8 00:23:45.919758 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 8 00:23:45.925863 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 8 00:23:45.967546 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 8 00:23:46.003848 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 8 00:23:46.010080 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 8 00:23:46.010080 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 8 00:23:46.010080 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 8 00:23:46.010080 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 8 00:23:46.010080 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:23:46.010080 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 8 00:23:46.010080 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 8 00:23:46.010080 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 8 00:23:46.010080 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:23:46.010080 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 8 00:23:46.010080 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 8 00:23:46.010080 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 8 00:23:46.010080 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 8 00:23:46.010080 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Nov 8 00:23:46.307368 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 8 00:23:46.633471 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 8 00:23:46.633471 ignition[1087]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 8 00:23:46.643933 ignition[1087]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:23:46.649686 ignition[1087]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 8 00:23:46.649686 ignition[1087]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 8 00:23:46.659055 ignition[1087]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Nov 8 00:23:46.663252 ignition[1087]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Nov 8 00:23:46.667917 ignition[1087]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:23:46.673219 ignition[1087]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 8 00:23:46.673219 ignition[1087]: INFO : files: files passed Nov 8 00:23:46.673219 ignition[1087]: INFO : Ignition finished successfully Nov 8 00:23:46.673999 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 8 00:23:46.689965 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 8 00:23:46.699879 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 8 00:23:46.705065 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 8 00:23:46.705169 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 8 00:23:46.723462 initrd-setup-root-after-ignition[1115]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:23:46.723462 initrd-setup-root-after-ignition[1115]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:23:46.737938 initrd-setup-root-after-ignition[1119]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 8 00:23:46.727444 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:23:46.733248 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 8 00:23:46.760995 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 8 00:23:46.784760 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 8 00:23:46.784884 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 8 00:23:46.792416 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Nov 8 00:23:46.799419 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 8 00:23:46.800526 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 8 00:23:46.813912 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 8 00:23:46.830606 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:23:46.839965 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 8 00:23:46.853538 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:23:46.854897 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:23:46.855363 systemd[1]: Stopped target timers.target - Timer Units. Nov 8 00:23:46.856475 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 8 00:23:46.856612 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 8 00:23:46.857517 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 8 00:23:46.858110 systemd[1]: Stopped target basic.target - Basic System. Nov 8 00:23:46.858633 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 8 00:23:46.859190 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 8 00:23:46.859715 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 8 00:23:46.860224 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 8 00:23:46.860790 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 8 00:23:46.861368 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 8 00:23:46.861878 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 8 00:23:46.862366 systemd[1]: Stopped target swap.target - Swaps. Nov 8 00:23:46.862968 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 8 00:23:46.863100 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 8 00:23:46.863993 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:23:46.993151 ignition[1139]: INFO : Ignition 2.19.0 Nov 8 00:23:46.993151 ignition[1139]: INFO : Stage: umount Nov 8 00:23:46.993151 ignition[1139]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 8 00:23:46.993151 ignition[1139]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 8 00:23:46.993151 ignition[1139]: INFO : umount: umount passed Nov 8 00:23:46.993151 ignition[1139]: INFO : Ignition finished successfully Nov 8 00:23:46.864678 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:23:46.865146 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 8 00:23:46.913279 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:23:46.917487 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 8 00:23:46.917626 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 8 00:23:46.923556 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 8 00:23:46.923670 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 8 00:23:46.924066 systemd[1]: ignition-files.service: Deactivated successfully. Nov 8 00:23:46.924157 systemd[1]: Stopped ignition-files.service - Ignition (files). 
Nov 8 00:23:46.924584 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 8 00:23:46.924671 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 8 00:23:46.972905 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 8 00:23:46.976028 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 8 00:23:46.976273 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:23:47.002716 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 8 00:23:47.009963 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 8 00:23:47.010160 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:23:47.018053 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 8 00:23:47.018161 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 8 00:23:47.031114 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 8 00:23:47.031210 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 8 00:23:47.038120 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 8 00:23:47.038229 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 8 00:23:47.046606 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 8 00:23:47.046688 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 8 00:23:47.059920 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 8 00:23:47.059988 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 8 00:23:47.066613 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 8 00:23:47.066679 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 8 00:23:47.073106 systemd[1]: Stopped target network.target - Network. Nov 8 00:23:47.083189 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 8 00:23:47.083267 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 8 00:23:47.089864 systemd[1]: Stopped target paths.target - Path Units. Nov 8 00:23:47.095792 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 8 00:23:47.099346 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:23:47.103130 systemd[1]: Stopped target slices.target - Slice Units. Nov 8 00:23:47.106065 systemd[1]: Stopped target sockets.target - Socket Units. Nov 8 00:23:47.107314 systemd[1]: iscsid.socket: Deactivated successfully. Nov 8 00:23:47.107359 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 00:23:47.110420 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 8 00:23:47.110460 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 8 00:23:47.111057 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 8 00:23:47.111099 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 8 00:23:47.111705 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 8 00:23:47.112213 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 8 00:23:47.216284 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 8 00:23:47.219374 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 8 00:23:47.226296 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Nov 8 00:23:47.232738 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 8 00:23:47.232810 systemd-networkd[898]: eth0: DHCPv6 lease lost Nov 8 00:23:47.232863 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 8 00:23:47.241529 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 8 00:23:47.241657 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 8 00:23:47.245868 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 8 00:23:47.245938 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:23:47.270856 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 8 00:23:47.273541 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 8 00:23:47.273615 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 8 00:23:47.277498 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 8 00:23:47.277544 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:23:47.286661 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 8 00:23:47.286708 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 8 00:23:47.293810 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 8 00:23:47.293866 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:23:47.300774 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:23:47.330220 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 8 00:23:47.330390 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:23:47.335343 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 8 00:23:47.335423 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 8 00:23:47.340263 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 8 00:23:47.340311 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:23:47.343528 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 8 00:23:47.343574 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 8 00:23:47.351126 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 8 00:23:47.351173 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 8 00:23:47.358005 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 00:23:47.358054 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:23:47.362402 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 8 00:23:47.388147 kernel: hv_netvsc 000d3ab4-9ac3-000d-3ab4-9ac3000d3ab4 eth0: Data path switched from VF: enP2781s1 Nov 8 00:23:47.396681 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 8 00:23:47.396907 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:23:47.403866 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:23:47.403918 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:23:47.411676 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 8 00:23:47.411796 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. 
Nov 8 00:23:47.431638 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 8 00:23:47.437259 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 8 00:23:47.565350 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 8 00:23:47.565487 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 8 00:23:47.572045 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 8 00:23:47.581457 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 8 00:23:47.581545 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 8 00:23:47.595949 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 8 00:23:47.620952 systemd[1]: Switching root. Nov 8 00:23:47.693337 systemd-journald[176]: Journal stopped Nov 8 00:23:52.764301 systemd-journald[176]: Received SIGTERM from PID 1 (systemd). Nov 8 00:23:52.764331 kernel: SELinux: policy capability network_peer_controls=1 Nov 8 00:23:52.764342 kernel: SELinux: policy capability open_perms=1 Nov 8 00:23:52.764354 kernel: SELinux: policy capability extended_socket_class=1 Nov 8 00:23:52.764362 kernel: SELinux: policy capability always_check_network=0 Nov 8 00:23:52.764370 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 8 00:23:52.764383 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 8 00:23:52.764394 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 8 00:23:52.764403 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 8 00:23:52.764414 kernel: audit: type=1403 audit(1762561429.039:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 8 00:23:52.764426 systemd[1]: Successfully loaded SELinux policy in 113.626ms. Nov 8 00:23:52.764442 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.817ms. Nov 8 00:23:52.764453 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 8 00:23:52.764462 systemd[1]: Detected virtualization microsoft. Nov 8 00:23:52.764477 systemd[1]: Detected architecture x86-64. Nov 8 00:23:52.764487 systemd[1]: Detected first boot. Nov 8 00:23:52.764501 systemd[1]: Hostname set to <ci-4081.3.6-n-2742f1d4ae>. Nov 8 00:23:52.764510 systemd[1]: Initializing machine ID from random generator. Nov 8 00:23:52.764520 zram_generator::config[1181]: No configuration found. Nov 8 00:23:52.764536 systemd[1]: Populated /etc with preset unit settings. Nov 8 00:23:52.764545 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 8 00:23:52.764558 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 8 00:23:52.764568 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 8 00:23:52.764579 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 8 00:23:52.764592 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 8 00:23:52.764602 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 8 00:23:52.764618 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 8 00:23:52.764628 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 8 00:23:52.764639 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
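"Initializing machine ID from random generator" is the first-boot path: /etc/machine-id does not exist yet, so systemd generates 128 random bits and stores them as 32 lowercase hex digits. A sketch of the generation, assuming plain random bytes (systemd additionally stamps UUID-v4 version bits into randomly generated IDs):

    import secrets

    # 128 random bits, formatted like /etc/machine-id.
    machine_id = secrets.token_hex(16)
    print(machine_id)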
Nov 8 00:23:52.764651 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 8 00:23:52.764661 systemd[1]: Created slice user.slice - User and Session Slice. Nov 8 00:23:52.764675 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:23:52.764686 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:23:52.764695 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 8 00:23:52.764711 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 8 00:23:52.764732 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 8 00:23:52.764744 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 8 00:23:52.764754 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 8 00:23:52.764768 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:23:52.764778 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 8 00:23:52.764798 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 8 00:23:52.764809 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 8 00:23:52.764823 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 8 00:23:52.764836 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:23:52.764846 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 8 00:23:52.764859 systemd[1]: Reached target slices.target - Slice Units. Nov 8 00:23:52.764870 systemd[1]: Reached target swap.target - Swaps. Nov 8 00:23:52.764881 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 8 00:23:52.764893 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 8 00:23:52.764906 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:23:52.764919 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 8 00:23:52.764931 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:23:52.764945 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 8 00:23:52.764955 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 8 00:23:52.764971 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 8 00:23:52.764981 systemd[1]: Mounting media.mount - External Media Directory... Nov 8 00:23:52.764994 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:23:52.765005 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 8 00:23:52.765017 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 8 00:23:52.765029 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 8 00:23:52.765040 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 8 00:23:52.765054 systemd[1]: Reached target machines.target - Containers. Nov 8 00:23:52.765067 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
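Unit names like dev-disk-by\x2dlabel-OEM.device come from systemd's path escaping: drop the enclosing slashes, map "/" to "-", and hex-escape any byte that would otherwise be ambiguous (notably "-" itself) as \xXX. A simplified re-implementation covering the common cases (systemd-escape(1) has a few more rules, e.g. for a leading "."):

    def systemd_escape_path(path: str) -> str:
        out = []
        for ch in path.strip("/"):
            if ch == "/":
                out.append("-")
            elif ch.isalnum() or ch in ":_.":
                out.append(ch)
            else:
                out.append("".join(f"\\x{b:02x}" for b in ch.encode()))
        return "".join(out)

    print(systemd_escape_path("/dev/disk/by-label/OEM") + ".device")
    # dev-disk-by\x2dlabel-OEM.device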
Nov 8 00:23:52.765078 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:23:52.765091 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 8 00:23:52.765102 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 8 00:23:52.765112 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:23:52.765124 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 8 00:23:52.765136 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:23:52.765147 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 8 00:23:52.765160 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:23:52.765173 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 8 00:23:52.765199 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 8 00:23:52.765215 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 8 00:23:52.765225 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 8 00:23:52.765239 systemd[1]: Stopped systemd-fsck-usr.service. Nov 8 00:23:52.765251 kernel: loop: module loaded Nov 8 00:23:52.765261 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 8 00:23:52.765275 kernel: fuse: init (API version 7.39) Nov 8 00:23:52.765287 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 8 00:23:52.765301 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 8 00:23:52.765313 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 8 00:23:52.765326 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 8 00:23:52.765339 kernel: ACPI: bus type drm_connector registered Nov 8 00:23:52.765367 systemd-journald[1287]: Collecting audit messages is disabled. Nov 8 00:23:52.765415 systemd[1]: verity-setup.service: Deactivated successfully. Nov 8 00:23:52.765438 systemd-journald[1287]: Journal started Nov 8 00:23:52.765475 systemd-journald[1287]: Runtime Journal (/run/log/journal/bedb330901974138b0007e6e54feb6d0) is 8.0M, max 158.8M, 150.8M free. Nov 8 00:23:51.852150 systemd[1]: Queued start job for default target multi-user.target. Nov 8 00:23:52.034360 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Nov 8 00:23:52.034782 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 8 00:23:52.773911 systemd[1]: Stopped verity-setup.service. Nov 8 00:23:52.773998 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 8 00:23:52.790071 systemd[1]: Started systemd-journald.service - Journal Service. Nov 8 00:23:52.791002 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 8 00:23:52.794683 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 8 00:23:52.798206 systemd[1]: Mounted media.mount - External Media Directory. Nov 8 00:23:52.801308 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 8 00:23:52.804832 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
Nov 8 00:23:52.808255 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 8 00:23:52.811672 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 8 00:23:52.815556 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:23:52.820037 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 8 00:23:52.820210 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 8 00:23:52.824410 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:23:52.824605 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:23:52.828935 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 8 00:23:52.829093 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 8 00:23:52.832626 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:23:52.832819 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:23:52.836951 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 8 00:23:52.837108 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 8 00:23:52.840793 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:23:52.840968 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:23:52.845082 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 8 00:23:52.849406 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 8 00:23:52.854029 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 8 00:23:52.875292 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 8 00:23:52.887792 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 8 00:23:52.900787 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 8 00:23:52.904942 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 8 00:23:52.905070 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 8 00:23:52.912476 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 8 00:23:52.922904 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 8 00:23:52.927764 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 8 00:23:52.931431 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:23:52.933599 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 8 00:23:52.946916 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 8 00:23:52.950926 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 8 00:23:52.951952 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 8 00:23:52.955199 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 8 00:23:52.959849 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Nov 8 00:23:52.968859 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 8 00:23:52.974323 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 8 00:23:52.987487 systemd-journald[1287]: Time spent on flushing to /var/log/journal/bedb330901974138b0007e6e54feb6d0 is 40.376ms for 956 entries. Nov 8 00:23:52.987487 systemd-journald[1287]: System Journal (/var/log/journal/bedb330901974138b0007e6e54feb6d0) is 8.0M, max 2.6G, 2.6G free. Nov 8 00:23:53.145510 systemd-journald[1287]: Received client request to flush runtime journal. Nov 8 00:23:53.145573 kernel: loop0: detected capacity change from 0 to 140768 Nov 8 00:23:52.982782 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:23:52.994220 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 8 00:23:52.999041 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 8 00:23:53.003575 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 8 00:23:53.011814 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 8 00:23:53.017243 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 8 00:23:53.027951 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 8 00:23:53.033687 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 8 00:23:53.059791 udevadm[1328]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Nov 8 00:23:53.071792 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:23:53.147099 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 8 00:23:53.170585 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 8 00:23:53.171428 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Nov 8 00:23:53.244516 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 8 00:23:53.254995 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 8 00:23:53.384702 systemd-tmpfiles[1334]: ACLs are not supported, ignoring. Nov 8 00:23:53.384740 systemd-tmpfiles[1334]: ACLs are not supported, ignoring. Nov 8 00:23:53.390939 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:23:53.654750 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 8 00:23:53.680747 kernel: loop1: detected capacity change from 0 to 219144 Nov 8 00:23:53.750960 kernel: loop2: detected capacity change from 0 to 31056 Nov 8 00:23:53.989886 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 8 00:23:53.998982 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:23:54.026402 systemd-udevd[1341]: Using default interface naming scheme 'v255'. Nov 8 00:23:54.203754 kernel: loop3: detected capacity change from 0 to 142488 Nov 8 00:23:54.214024 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:23:54.225810 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 8 00:23:54.314955 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Nov 8 00:23:54.326707 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 8 00:23:54.404752 kernel: mousedev: PS/2 mouse device common for all mice Nov 8 00:23:54.407536 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 8 00:23:54.423770 kernel: hv_vmbus: registering driver hyperv_fb Nov 8 00:23:54.427893 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Nov 8 00:23:54.433907 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Nov 8 00:23:54.455169 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#75 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Nov 8 00:23:54.467642 kernel: Console: switching to colour dummy device 80x25 Nov 8 00:23:54.468788 kernel: hv_vmbus: registering driver hv_balloon Nov 8 00:23:54.469784 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Nov 8 00:23:54.484342 kernel: Console: switching to colour frame buffer device 128x48 Nov 8 00:23:54.619307 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:23:54.652476 systemd-networkd[1352]: lo: Link UP Nov 8 00:23:54.654768 systemd-networkd[1352]: lo: Gained carrier Nov 8 00:23:54.664265 systemd-networkd[1352]: Enumeration completed Nov 8 00:23:54.665507 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 8 00:23:54.673041 systemd-networkd[1352]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:23:54.676017 systemd-networkd[1352]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 8 00:23:54.678479 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 8 00:23:54.698916 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1360) Nov 8 00:23:54.703928 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:23:54.704371 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:23:54.774569 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:23:54.804745 kernel: mlx5_core 0add:00:02.0 enP2781s1: Link up Nov 8 00:23:54.833804 kernel: hv_netvsc 000d3ab4-9ac3-000d-3ab4-9ac3000d3ab4 eth0: Data path switched to VF: enP2781s1 Nov 8 00:23:54.840079 systemd-networkd[1352]: enP2781s1: Link UP Nov 8 00:23:54.840225 systemd-networkd[1352]: eth0: Link UP Nov 8 00:23:54.840230 systemd-networkd[1352]: eth0: Gained carrier Nov 8 00:23:54.840252 systemd-networkd[1352]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 8 00:23:54.844122 kernel: loop4: detected capacity change from 0 to 140768 Nov 8 00:23:54.853132 systemd-networkd[1352]: enP2781s1: Gained carrier Nov 8 00:23:54.864815 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Nov 8 00:23:54.899744 kernel: loop5: detected capacity change from 0 to 219144 Nov 8 00:23:54.903798 systemd-networkd[1352]: eth0: DHCPv4 address 10.200.8.42/24, gateway 10.200.8.1 acquired from 168.63.129.16 Nov 8 00:23:54.904927 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 8 00:23:54.905147 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:23:54.938632 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
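The DHCPv4 line decodes as address 10.200.8.42 in a /24, gateway 10.200.8.1, leased by 168.63.129.16 (Azure's fixed wireserver/DHCP address). The standard library unpacks the CIDR directly:

    import ipaddress

    iface = ipaddress.ip_interface("10.200.8.42/24")
    print(iface.network)                                        # 10.200.8.0/24
    print(iface.netmask)                                        # 255.255.255.0
    print(ipaddress.ip_address("10.200.8.1") in iface.network)  # True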
Nov 8 00:23:54.947925 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Nov 8 00:23:54.957862 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 8 00:23:54.977744 kernel: loop6: detected capacity change from 0 to 31056 Nov 8 00:23:54.980885 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 8 00:23:54.988040 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 8 00:23:55.000739 kernel: loop7: detected capacity change from 0 to 142488 Nov 8 00:23:55.019282 (sd-merge)[1432]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Nov 8 00:23:55.019868 (sd-merge)[1432]: Merged extensions into '/usr'. Nov 8 00:23:55.024005 systemd[1]: Reloading requested from client PID 1317 ('systemd-sysext') (unit systemd-sysext.service)... Nov 8 00:23:55.024022 systemd[1]: Reloading... Nov 8 00:23:55.054292 lvm[1441]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 8 00:23:55.118759 zram_generator::config[1473]: No configuration found. Nov 8 00:23:55.282958 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:23:55.377336 systemd[1]: Reloading finished in 352 ms. Nov 8 00:23:55.412342 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:23:55.417174 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 8 00:23:55.421900 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 8 00:23:55.426013 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 8 00:23:55.437671 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:23:55.445926 systemd[1]: Starting ensure-sysext.service... Nov 8 00:23:55.451908 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 8 00:23:55.460919 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 8 00:23:55.469876 systemd[1]: Reloading requested from client PID 1536 ('systemctl') (unit ensure-sysext.service)... Nov 8 00:23:55.471858 lvm[1537]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 8 00:23:55.469893 systemd[1]: Reloading... Nov 8 00:23:55.541764 zram_generator::config[1565]: No configuration found. Nov 8 00:23:55.548032 systemd-tmpfiles[1538]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 8 00:23:55.548556 systemd-tmpfiles[1538]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 8 00:23:55.552003 systemd-tmpfiles[1538]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 8 00:23:55.552453 systemd-tmpfiles[1538]: ACLs are not supported, ignoring. Nov 8 00:23:55.552553 systemd-tmpfiles[1538]: ACLs are not supported, ignoring. Nov 8 00:23:55.573981 systemd-tmpfiles[1538]: Detected autofs mount point /boot during canonicalization of boot. Nov 8 00:23:55.573997 systemd-tmpfiles[1538]: Skipping /boot Nov 8 00:23:55.586281 systemd-tmpfiles[1538]: Detected autofs mount point /boot during canonicalization of boot. 
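The (sd-merge) lines are systemd-sysext combining the containerd-flatcar, docker-flatcar, kubernetes, and oem-azure extension images with the base /usr via a read-only overlayfs. A sketch of the equivalent mount invocation; the staging paths here are assumptions (the real ones live under hierarchy-specific /run directories), so it prints the command rather than running it:

    import shlex

    base = "/usr"
    layers = [  # hypothetical staging paths, topmost layer first
        "/run/extensions/oem-azure/usr",
        "/run/extensions/kubernetes/usr",
        "/run/extensions/docker-flatcar/usr",
        "/run/extensions/containerd-flatcar/usr",
    ]
    lowerdir = ":".join(layers + [base])  # base /usr is the bottom layer
    print(shlex.join(["mount", "-t", "overlay", "overlay",
                      "-o", f"lowerdir={lowerdir}", base]))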
Nov 8 00:23:55.586301 systemd-tmpfiles[1538]: Skipping /boot
Nov 8 00:23:55.696476 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 8 00:23:55.777652 systemd[1]: Reloading finished in 307 ms.
Nov 8 00:23:55.798184 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Nov 8 00:23:55.803085 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 8 00:23:55.820960 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Nov 8 00:23:55.829923 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 8 00:23:55.836950 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 8 00:23:55.845837 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 8 00:23:55.851068 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 8 00:23:55.861013 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:23:55.861439 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 8 00:23:55.869558 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 8 00:23:55.877635 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 8 00:23:55.886069 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 8 00:23:55.889882 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 8 00:23:55.890032 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:23:55.893082 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 8 00:23:55.894101 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 8 00:23:55.906052 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 8 00:23:55.906901 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 8 00:23:55.912218 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 8 00:23:55.912535 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 8 00:23:55.928687 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 8 00:23:55.937138 systemd[1]: Finished ensure-sysext.service.
Nov 8 00:23:55.943067 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:23:55.943620 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 8 00:23:55.950856 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 8 00:23:55.955899 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 8 00:23:55.967953 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 8 00:23:55.981962 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 8 00:23:55.985678 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 8 00:23:55.985790 systemd[1]: Reached target time-set.target - System Time Set.
Nov 8 00:23:55.989436 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 8 00:23:55.989990 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 8 00:23:55.990163 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 8 00:23:55.994043 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 8 00:23:55.994227 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 8 00:23:56.000903 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 8 00:23:56.001788 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 8 00:23:56.006360 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 8 00:23:56.007065 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 8 00:23:56.008929 systemd-resolved[1635]: Positive Trust Anchors:
Nov 8 00:23:56.008936 systemd-resolved[1635]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 8 00:23:56.008975 systemd-resolved[1635]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 8 00:23:56.012611 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 8 00:23:56.022434 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 8 00:23:56.022499 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 8 00:23:56.026124 systemd-resolved[1635]: Using system hostname 'ci-4081.3.6-n-2742f1d4ae'.
Nov 8 00:23:56.028224 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 8 00:23:56.032275 systemd[1]: Reached target network.target - Network.
Nov 8 00:23:56.034908 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 8 00:23:56.039962 augenrules[1664]: No rules
Nov 8 00:23:56.041161 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Nov 8 00:23:56.449474 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 8 00:23:56.453774 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 8 00:23:56.725970 systemd-networkd[1352]: eth0: Gained IPv6LL
Nov 8 00:23:56.728923 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Nov 8 00:23:56.733253 systemd[1]: Reached target network-online.target - Network is Online.
Nov 8 00:23:59.027733 ldconfig[1312]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 8 00:23:59.044580 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 8 00:23:59.054957 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 8 00:23:59.064673 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 8 00:23:59.068668 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 8 00:23:59.072076 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 8 00:23:59.076001 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 8 00:23:59.080276 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 8 00:23:59.083546 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 8 00:23:59.087424 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 8 00:23:59.091270 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 8 00:23:59.091307 systemd[1]: Reached target paths.target - Path Units.
Nov 8 00:23:59.094542 systemd[1]: Reached target timers.target - Timer Units.
Nov 8 00:23:59.099204 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 8 00:23:59.104150 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 8 00:23:59.115641 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 8 00:23:59.119496 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 8 00:23:59.122861 systemd[1]: Reached target sockets.target - Socket Units.
Nov 8 00:23:59.125713 systemd[1]: Reached target basic.target - Basic System.
Nov 8 00:23:59.128491 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 8 00:23:59.128522 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 8 00:23:59.138830 systemd[1]: Starting chronyd.service - NTP client/server...
Nov 8 00:23:59.144858 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 8 00:23:59.152999 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Nov 8 00:23:59.158914 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 8 00:23:59.165861 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 8 00:23:59.178564 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 8 00:23:59.182073 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 8 00:23:59.182125 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Nov 8 00:23:59.184937 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Nov 8 00:23:59.188922 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Nov 8 00:23:59.198070 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 8 00:23:59.205033 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 8 00:23:59.210137 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Nov 8 00:23:59.220865 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 8 00:23:59.226652 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 8 00:23:59.230798 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 8 00:23:59.237108 jq[1685]: false
Nov 8 00:23:59.242211 KVP[1687]: KVP starting; pid is:1687
Nov 8 00:23:59.252897 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 8 00:23:59.256667 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 8 00:23:59.257391 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 8 00:23:59.260906 systemd[1]: Starting update-engine.service - Update Engine...
Nov 8 00:23:59.273831 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 8 00:23:59.285202 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 8 00:23:59.285562 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 8 00:23:59.293775 jq[1699]: true
Nov 8 00:23:59.292324 (chronyd)[1681]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Nov 8 00:23:59.304124 KVP[1687]: KVP LIC Version: 3.1
Nov 8 00:23:59.306796 kernel: hv_utils: KVP IC version 4.0
Nov 8 00:23:59.322565 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 8 00:23:59.323503 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 8 00:23:59.337755 extend-filesystems[1686]: Found loop4
Nov 8 00:23:59.347031 extend-filesystems[1686]: Found loop5
Nov 8 00:23:59.347031 extend-filesystems[1686]: Found loop6
Nov 8 00:23:59.347031 extend-filesystems[1686]: Found loop7
Nov 8 00:23:59.347031 extend-filesystems[1686]: Found sda
Nov 8 00:23:59.347031 extend-filesystems[1686]: Found sda1
Nov 8 00:23:59.347031 extend-filesystems[1686]: Found sda2
Nov 8 00:23:59.347031 extend-filesystems[1686]: Found sda3
Nov 8 00:23:59.347031 extend-filesystems[1686]: Found usr
Nov 8 00:23:59.347031 extend-filesystems[1686]: Found sda4
Nov 8 00:23:59.347031 extend-filesystems[1686]: Found sda6
Nov 8 00:23:59.347031 extend-filesystems[1686]: Found sda7
Nov 8 00:23:59.347031 extend-filesystems[1686]: Found sda9
Nov 8 00:23:59.347031 extend-filesystems[1686]: Checking size of /dev/sda9
Nov 8 00:23:59.524832 jq[1704]: true
Nov 8 00:23:59.524954 coreos-metadata[1683]: Nov 08 00:23:59.499 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Nov 8 00:23:59.524954 coreos-metadata[1683]: Nov 08 00:23:59.514 INFO Fetch successful
Nov 8 00:23:59.524954 coreos-metadata[1683]: Nov 08 00:23:59.516 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Nov 8 00:23:59.525276 extend-filesystems[1686]: Old size kept for /dev/sda9
Nov 8 00:23:59.525276 extend-filesystems[1686]: Found sr0
Nov 8 00:23:59.539769 update_engine[1698]: I20251108 00:23:59.388584 1698 main.cc:92] Flatcar Update Engine starting
Nov 8 00:23:59.539769 update_engine[1698]: I20251108 00:23:59.390784 1698 update_check_scheduler.cc:74] Next update check in 7m55s
Nov 8 00:23:59.364502 dbus-daemon[1684]: [system] SELinux support is enabled
Nov 8 00:23:59.364708 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 8 00:23:59.544239 tar[1703]: linux-amd64/LICENSE
Nov 8 00:23:59.544239 tar[1703]: linux-amd64/helm
Nov 8 00:23:59.544586 coreos-metadata[1683]: Nov 08 00:23:59.529 INFO Fetch successful
Nov 8 00:23:59.544586 coreos-metadata[1683]: Nov 08 00:23:59.530 INFO Fetching http://168.63.129.16/machine/073c96ba-9b53-4e55-a639-c494e835f059/d0f5165d%2D4953%2D4b15%2Dad42%2D7bf74424a7ed.%5Fci%2D4081.3.6%2Dn%2D2742f1d4ae?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Nov 8 00:23:59.544586 coreos-metadata[1683]: Nov 08 00:23:59.532 INFO Fetch successful
Nov 8 00:23:59.544586 coreos-metadata[1683]: Nov 08 00:23:59.532 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Nov 8 00:23:59.394451 chronyd[1731]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Nov 8 00:23:59.371289 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 8 00:23:59.449028 chronyd[1731]: Timezone right/UTC failed leap second check, ignoring
Nov 8 00:23:59.371332 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 8 00:23:59.553005 bash[1746]: Updated "/home/core/.ssh/authorized_keys"
Nov 8 00:23:59.449235 chronyd[1731]: Loaded seccomp filter (level 2)
Nov 8 00:23:59.553225 coreos-metadata[1683]: Nov 08 00:23:59.548 INFO Fetch successful
Nov 8 00:23:59.374052 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 8 00:23:59.374073 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 8 00:23:59.396003 systemd[1]: Started update-engine.service - Update Engine.
Nov 8 00:23:59.425579 (ntainerd)[1730]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 8 00:23:59.425607 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Nov 8 00:23:59.433980 systemd[1]: motdgen.service: Deactivated successfully.
Nov 8 00:23:59.434409 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 8 00:23:59.452891 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 8 00:23:59.465968 systemd[1]: Started chronyd.service - NTP client/server.
Nov 8 00:23:59.476674 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 8 00:23:59.477583 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 8 00:23:59.500804 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 8 00:23:59.507815 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Nov 8 00:23:59.570998 systemd-logind[1697]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Nov 8 00:23:59.575494 systemd-logind[1697]: New seat seat0.
Nov 8 00:23:59.578325 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 8 00:23:59.650865 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Nov 8 00:23:59.664159 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Nov 8 00:23:59.735772 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1765)
Nov 8 00:23:59.916090 locksmithd[1753]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 8 00:24:00.275214 sshd_keygen[1719]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 8 00:24:00.308085 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Nov 8 00:24:00.321567 systemd[1]: Starting issuegen.service - Generate /run/issue...
Nov 8 00:24:00.342057 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Nov 8 00:24:00.356771 systemd[1]: issuegen.service: Deactivated successfully.
Nov 8 00:24:00.357762 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 8 00:24:00.371710 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 8 00:24:00.443061 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Nov 8 00:24:00.452664 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Nov 8 00:24:00.467132 systemd[1]: Started getty@tty1.service - Getty on tty1.
Nov 8 00:24:00.472163 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Nov 8 00:24:00.477547 systemd[1]: Reached target getty.target - Login Prompts.
Nov 8 00:24:00.516989 tar[1703]: linux-amd64/README.md
Nov 8 00:24:00.528150 containerd[1730]: time="2025-11-08T00:24:00.527667700Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Nov 8 00:24:00.532618 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Nov 8 00:24:00.562874 containerd[1730]: time="2025-11-08T00:24:00.562822500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Nov 8 00:24:00.565457 containerd[1730]: time="2025-11-08T00:24:00.564617500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Nov 8 00:24:00.565457 containerd[1730]: time="2025-11-08T00:24:00.564656400Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Nov 8 00:24:00.565457 containerd[1730]: time="2025-11-08T00:24:00.564677500Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Nov 8 00:24:00.565457 containerd[1730]: time="2025-11-08T00:24:00.564856400Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Nov 8 00:24:00.565457 containerd[1730]: time="2025-11-08T00:24:00.564878800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Nov 8 00:24:00.565457 containerd[1730]: time="2025-11-08T00:24:00.564951900Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Nov 8 00:24:00.565457 containerd[1730]: time="2025-11-08T00:24:00.564970600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Nov 8 00:24:00.565457 containerd[1730]: time="2025-11-08T00:24:00.565190200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 8 00:24:00.565457 containerd[1730]: time="2025-11-08T00:24:00.565212500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Nov 8 00:24:00.565457 containerd[1730]: time="2025-11-08T00:24:00.565231500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Nov 8 00:24:00.565457 containerd[1730]: time="2025-11-08T00:24:00.565246500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Nov 8 00:24:00.565902 containerd[1730]: time="2025-11-08T00:24:00.565336400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Nov 8 00:24:00.565902 containerd[1730]: time="2025-11-08T00:24:00.565567300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Nov 8 00:24:00.565902 containerd[1730]: time="2025-11-08T00:24:00.565786800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 8 00:24:00.565902 containerd[1730]: time="2025-11-08T00:24:00.565810500Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Nov 8 00:24:00.566042 containerd[1730]: time="2025-11-08T00:24:00.565924000Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Nov 8 00:24:00.566042 containerd[1730]: time="2025-11-08T00:24:00.565985200Z" level=info msg="metadata content store policy set" policy=shared
Nov 8 00:24:00.582593 containerd[1730]: time="2025-11-08T00:24:00.582553100Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Nov 8 00:24:00.582801 containerd[1730]: time="2025-11-08T00:24:00.582780900Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Nov 8 00:24:00.582923 containerd[1730]: time="2025-11-08T00:24:00.582910900Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Nov 8 00:24:00.583001 containerd[1730]: time="2025-11-08T00:24:00.582990800Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Nov 8 00:24:00.583052 containerd[1730]: time="2025-11-08T00:24:00.583038600Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Nov 8 00:24:00.583279 containerd[1730]: time="2025-11-08T00:24:00.583261000Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Nov 8 00:24:00.583701 containerd[1730]: time="2025-11-08T00:24:00.583684400Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Nov 8 00:24:00.583943 containerd[1730]: time="2025-11-08T00:24:00.583914700Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Nov 8 00:24:00.584763 containerd[1730]: time="2025-11-08T00:24:00.584022000Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Nov 8 00:24:00.584763 containerd[1730]: time="2025-11-08T00:24:00.584099700Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Nov 8 00:24:00.584763 containerd[1730]: time="2025-11-08T00:24:00.584123000Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Nov 8 00:24:00.584763 containerd[1730]: time="2025-11-08T00:24:00.584145900Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Nov 8 00:24:00.584763 containerd[1730]: time="2025-11-08T00:24:00.584163700Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Nov 8 00:24:00.584763 containerd[1730]: time="2025-11-08T00:24:00.584184500Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Nov 8 00:24:00.584763 containerd[1730]: time="2025-11-08T00:24:00.584206300Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Nov 8 00:24:00.584763 containerd[1730]: time="2025-11-08T00:24:00.584225400Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Nov 8 00:24:00.584763 containerd[1730]: time="2025-11-08T00:24:00.584244700Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Nov 8 00:24:00.584763 containerd[1730]: time="2025-11-08T00:24:00.584263300Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Nov 8 00:24:00.584763 containerd[1730]: time="2025-11-08T00:24:00.584290900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Nov 8 00:24:00.584763 containerd[1730]: time="2025-11-08T00:24:00.584309900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Nov 8 00:24:00.584763 containerd[1730]: time="2025-11-08T00:24:00.584328000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Nov 8 00:24:00.584763 containerd[1730]: time="2025-11-08T00:24:00.584360400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Nov 8 00:24:00.585076 containerd[1730]: time="2025-11-08T00:24:00.584382200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Nov 8 00:24:00.585076 containerd[1730]: time="2025-11-08T00:24:00.584418300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Nov 8 00:24:00.585076 containerd[1730]: time="2025-11-08T00:24:00.584439700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Nov 8 00:24:00.585076 containerd[1730]: time="2025-11-08T00:24:00.584459000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Nov 8 00:24:00.585076 containerd[1730]: time="2025-11-08T00:24:00.584477700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Nov 8 00:24:00.585076 containerd[1730]: time="2025-11-08T00:24:00.584499300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Nov 8 00:24:00.585076 containerd[1730]: time="2025-11-08T00:24:00.584516100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Nov 8 00:24:00.585076 containerd[1730]: time="2025-11-08T00:24:00.584533600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Nov 8 00:24:00.585076 containerd[1730]: time="2025-11-08T00:24:00.584551400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Nov 8 00:24:00.585076 containerd[1730]: time="2025-11-08T00:24:00.584572800Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Nov 8 00:24:00.585076 containerd[1730]: time="2025-11-08T00:24:00.584600800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Nov 8 00:24:00.585076 containerd[1730]: time="2025-11-08T00:24:00.584624800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Nov 8 00:24:00.585076 containerd[1730]: time="2025-11-08T00:24:00.584641500Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Nov 8 00:24:00.585076 containerd[1730]: time="2025-11-08T00:24:00.584697000Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Nov 8 00:24:00.585380 containerd[1730]: time="2025-11-08T00:24:00.585363200Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Nov 8 00:24:00.585439 containerd[1730]: time="2025-11-08T00:24:00.585417500Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Nov 8 00:24:00.585487 containerd[1730]: time="2025-11-08T00:24:00.585478000Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Nov 8 00:24:00.585537 containerd[1730]: time="2025-11-08T00:24:00.585528900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Nov 8 00:24:00.585579 containerd[1730]: time="2025-11-08T00:24:00.585571000Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Nov 8 00:24:00.585689 containerd[1730]: time="2025-11-08T00:24:00.585607800Z" level=info msg="NRI interface is disabled by configuration."
Nov 8 00:24:00.585689 containerd[1730]: time="2025-11-08T00:24:00.585620600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Nov 8 00:24:00.586064 containerd[1730]: time="2025-11-08T00:24:00.585998000Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Nov 8 00:24:00.586064 containerd[1730]: time="2025-11-08T00:24:00.586064500Z" level=info msg="Connect containerd service"
Nov 8 00:24:00.586310 containerd[1730]: time="2025-11-08T00:24:00.586111100Z" level=info msg="using legacy CRI server"
Nov 8 00:24:00.586310 containerd[1730]: time="2025-11-08T00:24:00.586122500Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Nov 8 00:24:00.586310 containerd[1730]: time="2025-11-08T00:24:00.586255100Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Nov 8 00:24:00.587613 containerd[1730]: time="2025-11-08T00:24:00.586898500Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 8 00:24:00.587613 containerd[1730]: time="2025-11-08T00:24:00.587296600Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Nov 8 00:24:00.587613 containerd[1730]: time="2025-11-08T00:24:00.587350700Z" level=info msg=serving... address=/run/containerd/containerd.sock
Nov 8 00:24:00.587613 containerd[1730]: time="2025-11-08T00:24:00.587416400Z" level=info msg="Start subscribing containerd event"
Nov 8 00:24:00.587613 containerd[1730]: time="2025-11-08T00:24:00.587460100Z" level=info msg="Start recovering state"
Nov 8 00:24:00.587613 containerd[1730]: time="2025-11-08T00:24:00.587545100Z" level=info msg="Start event monitor"
Nov 8 00:24:00.587613 containerd[1730]: time="2025-11-08T00:24:00.587567600Z" level=info msg="Start snapshots syncer"
Nov 8 00:24:00.587613 containerd[1730]: time="2025-11-08T00:24:00.587579200Z" level=info msg="Start cni network conf syncer for default"
Nov 8 00:24:00.587613 containerd[1730]: time="2025-11-08T00:24:00.587589900Z" level=info msg="Start streaming server"
Nov 8 00:24:00.592782 containerd[1730]: time="2025-11-08T00:24:00.589132900Z" level=info msg="containerd successfully booted in 0.062528s"
Nov 8 00:24:00.587765 systemd[1]: Started containerd.service - containerd container runtime.
Nov 8 00:24:01.292533 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 8 00:24:01.298222 systemd[1]: Reached target multi-user.target - Multi-User System.
Nov 8 00:24:01.303200 systemd[1]: Startup finished in 1.155s (kernel) + 10.969s (initrd) + 12.376s (userspace) = 24.501s.
Nov 8 00:24:01.312173 (kubelet)[1840]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 8 00:24:01.696025 login[1828]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Nov 8 00:24:01.701351 login[1829]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Nov 8 00:24:01.712948 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Nov 8 00:24:01.720032 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Nov 8 00:24:01.722219 systemd-logind[1697]: New session 1 of user core.
Nov 8 00:24:01.725655 systemd-logind[1697]: New session 2 of user core.
Nov 8 00:24:01.755012 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Nov 8 00:24:01.764074 systemd[1]: Starting user@500.service - User Manager for UID 500...
Nov 8 00:24:01.771992 (systemd)[1852]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Nov 8 00:24:01.963623 systemd[1852]: Queued start job for default target default.target.
Nov 8 00:24:01.968975 systemd[1852]: Created slice app.slice - User Application Slice.
Nov 8 00:24:01.969009 systemd[1852]: Reached target paths.target - Paths.
Nov 8 00:24:01.969029 systemd[1852]: Reached target timers.target - Timers.
Nov 8 00:24:01.970466 systemd[1852]: Starting dbus.socket - D-Bus User Message Bus Socket...
Nov 8 00:24:01.998184 systemd[1852]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Nov 8 00:24:01.998256 systemd[1852]: Reached target sockets.target - Sockets.
Nov 8 00:24:01.998275 systemd[1852]: Reached target basic.target - Basic System.
Nov 8 00:24:01.998321 systemd[1852]: Reached target default.target - Main User Target.
Nov 8 00:24:01.998357 systemd[1852]: Startup finished in 217ms.
Nov 8 00:24:01.998455 systemd[1]: Started user@500.service - User Manager for UID 500.
Nov 8 00:24:02.003928 systemd[1]: Started session-1.scope - Session 1 of User core.
Nov 8 00:24:02.006150 systemd[1]: Started session-2.scope - Session 2 of User core.
Nov 8 00:24:02.171908 kubelet[1840]: E1108 00:24:02.171871 1840 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 8 00:24:02.176209 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 8 00:24:02.177066 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 8 00:24:02.389062 waagent[1826]: 2025-11-08T00:24:02.388955Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1
Nov 8 00:24:02.433315 waagent[1826]: 2025-11-08T00:24:02.390574Z INFO Daemon Daemon OS: flatcar 4081.3.6
Nov 8 00:24:02.433315 waagent[1826]: 2025-11-08T00:24:02.391608Z INFO Daemon Daemon Python: 3.11.9
Nov 8 00:24:02.433315 waagent[1826]: 2025-11-08T00:24:02.392852Z INFO Daemon Daemon Run daemon
Nov 8 00:24:02.433315 waagent[1826]: 2025-11-08T00:24:02.393306Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.6'
Nov 8 00:24:02.433315 waagent[1826]: 2025-11-08T00:24:02.394240Z INFO Daemon Daemon Using waagent for provisioning
Nov 8 00:24:02.433315 waagent[1826]: 2025-11-08T00:24:02.395434Z INFO Daemon Daemon Activate resource disk
Nov 8 00:24:02.433315 waagent[1826]: 2025-11-08T00:24:02.396359Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Nov 8 00:24:02.433315 waagent[1826]: 2025-11-08T00:24:02.400941Z INFO Daemon Daemon Found device: None
Nov 8 00:24:02.433315 waagent[1826]: 2025-11-08T00:24:02.402315Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Nov 8 00:24:02.433315 waagent[1826]: 2025-11-08T00:24:02.402920Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Nov 8 00:24:02.433315 waagent[1826]: 2025-11-08T00:24:02.404923Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Nov 8 00:24:02.433315 waagent[1826]: 2025-11-08T00:24:02.405200Z INFO Daemon Daemon Running default provisioning handler
Nov 8 00:24:02.436432 waagent[1826]: 2025-11-08T00:24:02.436356Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
Nov 8 00:24:02.444314 waagent[1826]: 2025-11-08T00:24:02.444249Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Nov 8 00:24:02.450093 waagent[1826]: 2025-11-08T00:24:02.450031Z INFO Daemon Daemon cloud-init is enabled: False
Nov 8 00:24:02.455271 waagent[1826]: 2025-11-08T00:24:02.451696Z INFO Daemon Daemon Copying ovf-env.xml
Nov 8 00:24:02.584939 waagent[1826]: 2025-11-08T00:24:02.583274Z INFO Daemon Daemon Successfully mounted dvd
Nov 8 00:24:02.599178 waagent[1826]: 2025-11-08T00:24:02.598853Z INFO Daemon Daemon Detect protocol endpoint
Nov 8 00:24:02.599034 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Nov 8 00:24:02.602105 waagent[1826]: 2025-11-08T00:24:02.602037Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Nov 8 00:24:02.605558 waagent[1826]: 2025-11-08T00:24:02.605506Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Nov 8 00:24:02.617785 waagent[1826]: 2025-11-08T00:24:02.606828Z INFO Daemon Daemon Test for route to 168.63.129.16
Nov 8 00:24:02.617785 waagent[1826]: 2025-11-08T00:24:02.607570Z INFO Daemon Daemon Route to 168.63.129.16 exists
Nov 8 00:24:02.617785 waagent[1826]: 2025-11-08T00:24:02.608467Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Nov 8 00:24:02.640835 waagent[1826]: 2025-11-08T00:24:02.640676Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Nov 8 00:24:02.650619 waagent[1826]: 2025-11-08T00:24:02.642435Z INFO Daemon Daemon Wire protocol version:2012-11-30
Nov 8 00:24:02.650619 waagent[1826]: 2025-11-08T00:24:02.643385Z INFO Daemon Daemon Server preferred version:2015-04-05
Nov 8 00:24:02.708032 waagent[1826]: 2025-11-08T00:24:02.707931Z INFO Daemon Daemon Initializing goal state during protocol detection
Nov 8 00:24:02.712239 waagent[1826]: 2025-11-08T00:24:02.712054Z INFO Daemon Daemon Forcing an update of the goal state.
Nov 8 00:24:02.718614 waagent[1826]: 2025-11-08T00:24:02.718537Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
Nov 8 00:24:02.738620 waagent[1826]: 2025-11-08T00:24:02.738516Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177
Nov 8 00:24:02.758561 waagent[1826]: 2025-11-08T00:24:02.741327Z INFO Daemon
Nov 8 00:24:02.758561 waagent[1826]: 2025-11-08T00:24:02.743511Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 54d997a9-5f20-428d-83c6-9c566f17b3a3 eTag: 13107081559939259298 source: Fabric]
Nov 8 00:24:02.758561 waagent[1826]: 2025-11-08T00:24:02.745226Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
Nov 8 00:24:02.758561 waagent[1826]: 2025-11-08T00:24:02.746007Z INFO Daemon
Nov 8 00:24:02.758561 waagent[1826]: 2025-11-08T00:24:02.746544Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
Nov 8 00:24:02.761268 waagent[1826]: 2025-11-08T00:24:02.761226Z INFO Daemon Daemon Downloading artifacts profile blob
Nov 8 00:24:02.881182 waagent[1826]: 2025-11-08T00:24:02.881094Z INFO Daemon Downloaded certificate {'thumbprint': '3F1D174BD29838D322098F4AF5D0DB7FA98D1C77', 'hasPrivateKey': True}
Nov 8 00:24:02.884825 waagent[1826]: 2025-11-08T00:24:02.884760Z INFO Daemon Fetch goal state completed
Nov 8 00:24:02.920341 waagent[1826]: 2025-11-08T00:24:02.920181Z INFO Daemon Daemon Starting provisioning
Nov 8 00:24:02.924233 waagent[1826]: 2025-11-08T00:24:02.924139Z INFO Daemon Daemon Handle ovf-env.xml.
Nov 8 00:24:02.927048 waagent[1826]: 2025-11-08T00:24:02.925673Z INFO Daemon Daemon Set hostname [ci-4081.3.6-n-2742f1d4ae]
Nov 8 00:24:02.939262 waagent[1826]: 2025-11-08T00:24:02.939188Z INFO Daemon Daemon Publish hostname [ci-4081.3.6-n-2742f1d4ae]
Nov 8 00:24:02.949644 waagent[1826]: 2025-11-08T00:24:02.941170Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Nov 8 00:24:02.949644 waagent[1826]: 2025-11-08T00:24:02.941739Z INFO Daemon Daemon Primary interface is [eth0]
Nov 8 00:24:02.964737 systemd-networkd[1352]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 8 00:24:02.964749 systemd-networkd[1352]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 8 00:24:02.964797 systemd-networkd[1352]: eth0: DHCP lease lost
Nov 8 00:24:02.966194 waagent[1826]: 2025-11-08T00:24:02.966096Z INFO Daemon Daemon Create user account if not exists
Nov 8 00:24:02.969822 systemd-networkd[1352]: eth0: DHCPv6 lease lost
Nov 8 00:24:02.977849 waagent[1826]: 2025-11-08T00:24:02.969832Z INFO Daemon Daemon User core already exists, skip useradd
Nov 8 00:24:02.977849 waagent[1826]: 2025-11-08T00:24:02.971144Z INFO Daemon Daemon Configure sudoer
Nov 8 00:24:02.977849 waagent[1826]: 2025-11-08T00:24:02.972104Z INFO Daemon Daemon Configure sshd
Nov 8 00:24:02.977849 waagent[1826]: 2025-11-08T00:24:02.972924Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
Nov 8 00:24:02.977849 waagent[1826]: 2025-11-08T00:24:02.973754Z INFO Daemon Daemon Deploy ssh public key.
Nov 8 00:24:03.024777 systemd-networkd[1352]: eth0: DHCPv4 address 10.200.8.42/24, gateway 10.200.8.1 acquired from 168.63.129.16
Nov 8 00:24:12.289084 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Nov 8 00:24:12.295943 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 8 00:24:12.405531 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 8 00:24:12.410118 (kubelet)[1913]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 8 00:24:13.130365 kubelet[1913]: E1108 00:24:13.130293 1913 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 8 00:24:13.134105 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 8 00:24:13.134326 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 8 00:24:23.268947 chronyd[1731]: Selected source PHC0
Nov 8 00:24:23.289191 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Nov 8 00:24:23.295933 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 8 00:24:23.411341 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 8 00:24:23.422052 (kubelet)[1928]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 8 00:24:23.461007 kubelet[1928]: E1108 00:24:23.460950 1928 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 8 00:24:23.463752 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 8 00:24:23.463967 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 8 00:24:33.055958 waagent[1826]: 2025-11-08T00:24:33.055893Z INFO Daemon Daemon Provisioning complete
Nov 8 00:24:33.068976 waagent[1826]: 2025-11-08T00:24:33.068906Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Nov 8 00:24:33.078553 waagent[1826]: 2025-11-08T00:24:33.070340Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Nov 8 00:24:33.078553 waagent[1826]: 2025-11-08T00:24:33.071875Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent
Nov 8 00:24:33.197020 waagent[1935]: 2025-11-08T00:24:33.196921Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1)
Nov 8 00:24:33.197445 waagent[1935]: 2025-11-08T00:24:33.197092Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.6
Nov 8 00:24:33.197445 waagent[1935]: 2025-11-08T00:24:33.197178Z INFO ExtHandler ExtHandler Python: 3.11.9
Nov 8 00:24:33.265145 waagent[1935]: 2025-11-08T00:24:33.265039Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.6; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Nov 8 00:24:33.265401 waagent[1935]: 2025-11-08T00:24:33.265345Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Nov 8 00:24:33.265508 waagent[1935]: 2025-11-08T00:24:33.265462Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Nov 8 00:24:33.273658 waagent[1935]: 2025-11-08T00:24:33.273586Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Nov 8 00:24:33.284630 waagent[1935]: 2025-11-08T00:24:33.284569Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177
Nov 8 00:24:33.285118 waagent[1935]: 2025-11-08T00:24:33.285056Z INFO ExtHandler
Nov 8 00:24:33.285197 waagent[1935]: 2025-11-08T00:24:33.285157Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: f4240950-a593-4d3e-a40f-4e43884efa6a eTag: 13107081559939259298 source: Fabric]
Nov 8 00:24:33.285520 waagent[1935]: 2025-11-08T00:24:33.285466Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Nov 8 00:24:33.286139 waagent[1935]: 2025-11-08T00:24:33.286080Z INFO ExtHandler
Nov 8 00:24:33.286212 waagent[1935]: 2025-11-08T00:24:33.286172Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Nov 8 00:24:33.289761 waagent[1935]: 2025-11-08T00:24:33.289702Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Nov 8 00:24:33.366326 waagent[1935]: 2025-11-08T00:24:33.366197Z INFO ExtHandler Downloaded certificate {'thumbprint': '3F1D174BD29838D322098F4AF5D0DB7FA98D1C77', 'hasPrivateKey': True}
Nov 8 00:24:33.366852 waagent[1935]: 2025-11-08T00:24:33.366794Z INFO ExtHandler Fetch goal state completed
Nov 8 00:24:33.380337 waagent[1935]: 2025-11-08T00:24:33.380270Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1935
Nov 8 00:24:33.380497 waagent[1935]: 2025-11-08T00:24:33.380448Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
Nov 8 00:24:33.382055 waagent[1935]: 2025-11-08T00:24:33.381996Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.6', '', 'Flatcar Container Linux by Kinvolk']
Nov 8 00:24:33.382413 waagent[1935]: 2025-11-08T00:24:33.382364Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Nov 8 00:24:33.413156 waagent[1935]: 2025-11-08T00:24:33.413107Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Nov 8 00:24:33.413370 waagent[1935]: 2025-11-08T00:24:33.413321Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Nov 8 00:24:33.419991 waagent[1935]: 2025-11-08T00:24:33.419945Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Nov 8 00:24:33.427323 systemd[1]: Reloading requested from client PID 1948 ('systemctl') (unit waagent.service)...
Nov 8 00:24:33.427342 systemd[1]: Reloading...
Nov 8 00:24:33.507806 zram_generator::config[1978]: No configuration found.
Nov 8 00:24:33.636089 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 8 00:24:33.717618 systemd[1]: Reloading finished in 289 ms.
Nov 8 00:24:33.740482 waagent[1935]: 2025-11-08T00:24:33.739959Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service
Nov 8 00:24:33.747712 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Nov 8 00:24:33.749888 systemd[1]: Reloading requested from client PID 2039 ('systemctl') (unit waagent.service)...
Nov 8 00:24:33.749906 systemd[1]: Reloading...
Nov 8 00:24:33.835749 zram_generator::config[2073]: No configuration found.
Nov 8 00:24:33.960917 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 8 00:24:34.043340 systemd[1]: Reloading finished in 293 ms.
Nov 8 00:24:34.071255 waagent[1935]: 2025-11-08T00:24:34.068034Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service
Nov 8 00:24:34.071255 waagent[1935]: 2025-11-08T00:24:34.068252Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully
Nov 8 00:24:34.077012 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 8 00:24:34.869769 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 8 00:24:34.878067 (kubelet)[2144]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 8 00:24:34.916426 kubelet[2144]: E1108 00:24:34.916365 2144 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 8 00:24:34.918938 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 8 00:24:34.919155 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 8 00:24:35.058346 waagent[1935]: 2025-11-08T00:24:35.058250Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Nov 8 00:24:35.059058 waagent[1935]: 2025-11-08T00:24:35.058981Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True]
Nov 8 00:24:35.059833 waagent[1935]: 2025-11-08T00:24:35.059775Z INFO ExtHandler ExtHandler Starting env monitor service.
Nov 8 00:24:35.059971 waagent[1935]: 2025-11-08T00:24:35.059921Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Nov 8 00:24:35.060374 waagent[1935]: 2025-11-08T00:24:35.060326Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Nov 8 00:24:35.060539 waagent[1935]: 2025-11-08T00:24:35.060494Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Nov 8 00:24:35.061027 waagent[1935]: 2025-11-08T00:24:35.060976Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Nov 8 00:24:35.061236 waagent[1935]: 2025-11-08T00:24:35.061191Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Nov 8 00:24:35.061354 waagent[1935]: 2025-11-08T00:24:35.061256Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Nov 8 00:24:35.061675 waagent[1935]: 2025-11-08T00:24:35.061625Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Nov 8 00:24:35.062101 waagent[1935]: 2025-11-08T00:24:35.061953Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Nov 8 00:24:35.062248 waagent[1935]: 2025-11-08T00:24:35.062190Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Nov 8 00:24:35.062389 waagent[1935]: 2025-11-08T00:24:35.062302Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Nov 8 00:24:35.063066 waagent[1935]: 2025-11-08T00:24:35.063011Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Nov 8 00:24:35.063066 waagent[1935]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Nov 8 00:24:35.063066 waagent[1935]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0
Nov 8 00:24:35.063066 waagent[1935]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Nov 8 00:24:35.063066 waagent[1935]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Nov 8 00:24:35.063066 waagent[1935]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Nov 8 00:24:35.063066 waagent[1935]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Nov 8 00:24:35.063630 waagent[1935]: 2025-11-08T00:24:35.063575Z INFO EnvHandler ExtHandler Configure routes
Nov 8 00:24:35.064150 waagent[1935]: 2025-11-08T00:24:35.064106Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Nov 8 00:24:35.064830 waagent[1935]: 2025-11-08T00:24:35.064406Z INFO EnvHandler ExtHandler Gateway:None
Nov 8 00:24:35.064830 waagent[1935]: 2025-11-08T00:24:35.064500Z INFO EnvHandler ExtHandler Routes:None
Nov 8 00:24:35.073636 waagent[1935]: 2025-11-08T00:24:35.073592Z INFO ExtHandler ExtHandler
Nov 8 00:24:35.073768 waagent[1935]: 2025-11-08T00:24:35.073698Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 0851ce20-d085-4938-8e4a-789d6f30203b correlation 0b15999f-753c-42d4-8364-3b8b48dd12dc created: 2025-11-08T00:23:06.259453Z]
Nov 8 00:24:35.074201 waagent[1935]: 2025-11-08T00:24:35.074151Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Nov 8 00:24:35.074708 waagent[1935]: 2025-11-08T00:24:35.074662Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms]
Nov 8 00:24:35.111300 waagent[1935]: 2025-11-08T00:24:35.111241Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: BA335C53-3676-41CE-B7FC-AB1157DDCF6B;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0]
Nov 8 00:24:35.136200 waagent[1935]: 2025-11-08T00:24:35.136086Z INFO MonitorHandler ExtHandler Network interfaces:
Nov 8 00:24:35.136200 waagent[1935]: Executing ['ip', '-a', '-o', 'link']:
Nov 8 00:24:35.136200 waagent[1935]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Nov 8 00:24:35.136200 waagent[1935]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b4:9a:c3 brd ff:ff:ff:ff:ff:ff
Nov 8 00:24:35.136200 waagent[1935]: 3: enP2781s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b4:9a:c3 brd ff:ff:ff:ff:ff:ff\ altname enP2781p0s2
Nov 8 00:24:35.136200 waagent[1935]: Executing ['ip', '-4', '-a', '-o', 'address']:
Nov 8 00:24:35.136200 waagent[1935]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Nov 8 00:24:35.136200 waagent[1935]: 2: eth0 inet 10.200.8.42/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever
Nov 8 00:24:35.136200 waagent[1935]: Executing ['ip', '-6', '-a', '-o', 'address']:
Nov 8 00:24:35.136200 waagent[1935]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Nov 8 00:24:35.136200 waagent[1935]: 2: eth0 inet6 fe80::20d:3aff:feb4:9ac3/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Nov 8 00:24:35.212304 waagent[1935]: 2025-11-08T00:24:35.212221Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules:
Nov 8 00:24:35.212304 waagent[1935]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Nov 8 00:24:35.212304 waagent[1935]: pkts bytes target prot opt in out source destination
Nov 8 00:24:35.212304 waagent[1935]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Nov 8 00:24:35.212304 waagent[1935]: pkts bytes target prot opt in out source destination
Nov 8 00:24:35.212304 waagent[1935]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Nov 8 00:24:35.212304 waagent[1935]: pkts bytes target prot opt in out source destination
Nov 8 00:24:35.212304 waagent[1935]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Nov 8 00:24:35.212304 waagent[1935]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Nov 8 00:24:35.212304 waagent[1935]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Nov 8 00:24:35.215691 waagent[1935]: 2025-11-08T00:24:35.215632Z INFO EnvHandler ExtHandler Current Firewall rules:
Nov 8 00:24:35.215691 waagent[1935]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Nov 8 00:24:35.215691 waagent[1935]: pkts bytes target prot opt in out source destination
Nov 8 00:24:35.215691 waagent[1935]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Nov 8 00:24:35.215691 waagent[1935]: pkts bytes target prot opt in out source destination
Nov 8 00:24:35.215691 waagent[1935]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Nov 8 00:24:35.215691 waagent[1935]: pkts bytes target prot opt in out source destination
Nov 8 00:24:35.215691 waagent[1935]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Nov 8 00:24:35.215691 waagent[1935]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Nov 8 00:24:35.215691 waagent[1935]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Nov 8 00:24:35.216088 waagent[1935]: 2025-11-08T00:24:35.215973Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Nov 8 00:24:42.604692 kernel: hv_balloon: Max. dynamic memory size: 8192 MB
Nov 8 00:24:44.813967 update_engine[1698]: I20251108 00:24:44.813861 1698 update_attempter.cc:509] Updating boot flags...
Nov 8 00:24:44.861104 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (2191)
Nov 8 00:24:44.929000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Nov 8 00:24:44.946554 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 8 00:24:44.981784 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (2190)
Nov 8 00:24:45.547695 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
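The waagent routing-table dump above prints /proc/net/route verbatim, where addresses are little-endian hex words. A small decoding sketch (not part of the log), in Python:

    import socket
    import struct

    def ipv4_from_route_hex(field: str) -> str:
        # /proc/net/route stores IPv4 addresses as little-endian hex words.
        return socket.inet_ntoa(struct.pack("<I", int(field, 16)))

    # Destination/gateway pairs copied from the routing table above.
    for dest, gateway in [("00000000", "0108C80A"),   # default route
                          ("10813FA8", "0108C80A"),   # Azure wireserver
                          ("FEA9FEA9", "0108C80A")]:  # instance metadata endpoint
        print(ipv4_from_route_hex(dest), "via", ipv4_from_route_hex(gateway))
    # 0.0.0.0 via 10.200.8.1
    # 168.63.129.16 via 10.200.8.1
    # 169.254.169.254 via 10.200.8.1

Decoded, the table is a default route via 10.200.8.1 plus host routes to 168.63.129.16 and 169.254.169.254; 168.63.129.16 is the same wireserver address the firewall rules above pin down (DNS on dpt:53 and root-owned traffic accepted, other new connections dropped).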
Nov 8 00:24:45.552386 (kubelet)[2253]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:24:45.589066 kubelet[2253]: E1108 00:24:45.589020 2253 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:24:45.591433 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:24:45.591660 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:24:52.413095 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 8 00:24:52.414481 systemd[1]: Started sshd@0-10.200.8.42:22-10.200.16.10:43878.service - OpenSSH per-connection server daemon (10.200.16.10:43878). Nov 8 00:24:53.084372 sshd[2261]: Accepted publickey for core from 10.200.16.10 port 43878 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY Nov 8 00:24:53.085941 sshd[2261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:24:53.090052 systemd-logind[1697]: New session 3 of user core. Nov 8 00:24:53.096884 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 8 00:24:53.633636 systemd[1]: Started sshd@1-10.200.8.42:22-10.200.16.10:43894.service - OpenSSH per-connection server daemon (10.200.16.10:43894). Nov 8 00:24:54.257781 sshd[2266]: Accepted publickey for core from 10.200.16.10 port 43894 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY Nov 8 00:24:54.259278 sshd[2266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:24:54.263712 systemd-logind[1697]: New session 4 of user core. Nov 8 00:24:54.269900 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 8 00:24:54.708079 sshd[2266]: pam_unix(sshd:session): session closed for user core Nov 8 00:24:54.711017 systemd[1]: sshd@1-10.200.8.42:22-10.200.16.10:43894.service: Deactivated successfully. Nov 8 00:24:54.713134 systemd[1]: session-4.scope: Deactivated successfully. Nov 8 00:24:54.714574 systemd-logind[1697]: Session 4 logged out. Waiting for processes to exit. Nov 8 00:24:54.715659 systemd-logind[1697]: Removed session 4. Nov 8 00:24:54.818988 systemd[1]: Started sshd@2-10.200.8.42:22-10.200.16.10:43902.service - OpenSSH per-connection server daemon (10.200.16.10:43902). Nov 8 00:24:55.445660 sshd[2273]: Accepted publickey for core from 10.200.16.10 port 43902 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY Nov 8 00:24:55.447162 sshd[2273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:24:55.452136 systemd-logind[1697]: New session 5 of user core. Nov 8 00:24:55.457882 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 8 00:24:55.789060 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Nov 8 00:24:55.801968 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:24:55.891126 sshd[2273]: pam_unix(sshd:session): session closed for user core Nov 8 00:24:55.893978 systemd[1]: sshd@2-10.200.8.42:22-10.200.16.10:43902.service: Deactivated successfully. Nov 8 00:24:55.895895 systemd[1]: session-5.scope: Deactivated successfully. Nov 8 00:24:55.897365 systemd-logind[1697]: Session 5 logged out. Waiting for processes to exit. 
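The kubelet exits with status=1 here because /var/lib/kubelet/config.yaml does not exist yet; kubeadm writes that file during init/join, so systemd keeps scheduling restarts (the counter is already at 4) until it appears. A minimal sketch of the same check (the path is taken from the error above; the rest is illustrative):

    from pathlib import Path

    config = Path("/var/lib/kubelet/config.yaml")
    if not config.is_file():
        # The condition kubelet trips over above: until `kubeadm init` or
        # `kubeadm join` writes this file, every scheduled restart of
        # kubelet.service ends in status=1/FAILURE.
        print(f"{config}: no such file or directory")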
Nov 8 00:24:55.898317 systemd-logind[1697]: Removed session 5. Nov 8 00:24:56.002608 systemd[1]: Started sshd@3-10.200.8.42:22-10.200.16.10:43918.service - OpenSSH per-connection server daemon (10.200.16.10:43918). Nov 8 00:24:56.157447 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:24:56.161806 (kubelet)[2290]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:24:56.202542 kubelet[2290]: E1108 00:24:56.202488 2290 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:24:56.205128 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:24:56.205345 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:24:56.630961 sshd[2283]: Accepted publickey for core from 10.200.16.10 port 43918 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY Nov 8 00:24:56.632424 sshd[2283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:24:56.637503 systemd-logind[1697]: New session 6 of user core. Nov 8 00:24:56.643906 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 8 00:24:57.087792 sshd[2283]: pam_unix(sshd:session): session closed for user core Nov 8 00:24:57.091524 systemd[1]: sshd@3-10.200.8.42:22-10.200.16.10:43918.service: Deactivated successfully. Nov 8 00:24:57.093392 systemd[1]: session-6.scope: Deactivated successfully. Nov 8 00:24:57.094118 systemd-logind[1697]: Session 6 logged out. Waiting for processes to exit. Nov 8 00:24:57.095121 systemd-logind[1697]: Removed session 6. Nov 8 00:24:57.205930 systemd[1]: Started sshd@4-10.200.8.42:22-10.200.16.10:43920.service - OpenSSH per-connection server daemon (10.200.16.10:43920). Nov 8 00:24:57.831612 sshd[2302]: Accepted publickey for core from 10.200.16.10 port 43920 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY Nov 8 00:24:57.833129 sshd[2302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:24:57.838146 systemd-logind[1697]: New session 7 of user core. Nov 8 00:24:57.843892 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 8 00:24:58.341637 sudo[2305]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 8 00:24:58.342041 sudo[2305]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:24:58.368130 sudo[2305]: pam_unix(sudo:session): session closed for user root Nov 8 00:24:58.470956 sshd[2302]: pam_unix(sshd:session): session closed for user core Nov 8 00:24:58.474272 systemd[1]: sshd@4-10.200.8.42:22-10.200.16.10:43920.service: Deactivated successfully. Nov 8 00:24:58.476376 systemd[1]: session-7.scope: Deactivated successfully. Nov 8 00:24:58.477994 systemd-logind[1697]: Session 7 logged out. Waiting for processes to exit. Nov 8 00:24:58.478985 systemd-logind[1697]: Removed session 7. Nov 8 00:24:58.584900 systemd[1]: Started sshd@5-10.200.8.42:22-10.200.16.10:43936.service - OpenSSH per-connection server daemon (10.200.16.10:43936). 
Nov 8 00:24:59.221146 sshd[2310]: Accepted publickey for core from 10.200.16.10 port 43936 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY Nov 8 00:24:59.222670 sshd[2310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:24:59.227666 systemd-logind[1697]: New session 8 of user core. Nov 8 00:24:59.232891 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 8 00:24:59.565335 sudo[2314]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 8 00:24:59.565697 sudo[2314]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:24:59.568967 sudo[2314]: pam_unix(sudo:session): session closed for user root Nov 8 00:24:59.573871 sudo[2313]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 8 00:24:59.574215 sudo[2313]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:24:59.587033 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 8 00:24:59.588597 auditctl[2317]: No rules Nov 8 00:24:59.588977 systemd[1]: audit-rules.service: Deactivated successfully. Nov 8 00:24:59.589178 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 8 00:24:59.592015 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 8 00:24:59.617761 augenrules[2335]: No rules Nov 8 00:24:59.619204 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 8 00:24:59.620504 sudo[2313]: pam_unix(sudo:session): session closed for user root Nov 8 00:24:59.726585 sshd[2310]: pam_unix(sshd:session): session closed for user core Nov 8 00:24:59.729606 systemd[1]: sshd@5-10.200.8.42:22-10.200.16.10:43936.service: Deactivated successfully. Nov 8 00:24:59.731549 systemd[1]: session-8.scope: Deactivated successfully. Nov 8 00:24:59.733064 systemd-logind[1697]: Session 8 logged out. Waiting for processes to exit. Nov 8 00:24:59.734200 systemd-logind[1697]: Removed session 8. Nov 8 00:24:59.837801 systemd[1]: Started sshd@6-10.200.8.42:22-10.200.16.10:43944.service - OpenSSH per-connection server daemon (10.200.16.10:43944). Nov 8 00:25:00.473315 sshd[2343]: Accepted publickey for core from 10.200.16.10 port 43944 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY Nov 8 00:25:00.474803 sshd[2343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:25:00.478779 systemd-logind[1697]: New session 9 of user core. Nov 8 00:25:00.487869 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 8 00:25:00.818806 sudo[2346]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 8 00:25:00.819170 sudo[2346]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:25:02.061097 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 8 00:25:02.062981 (dockerd)[2361]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 8 00:25:03.252015 dockerd[2361]: time="2025-11-08T00:25:03.251948672Z" level=info msg="Starting up" Nov 8 00:25:03.708791 dockerd[2361]: time="2025-11-08T00:25:03.708743087Z" level=info msg="Loading containers: start." 
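The sudo session above removes the SELinux audit rule files and restarts audit-rules, after which both auditctl and augenrules report "No rules". A sketch that queries the same state (assumes root and the audit userspace tools are installed):

    import subprocess

    # `auditctl -l` lists the audit rules currently loaded in the kernel;
    # after the reload above it prints exactly "No rules".
    result = subprocess.run(["auditctl", "-l"], capture_output=True, text=True)
    print(result.stdout.strip() or result.stderr.strip())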
Nov 8 00:25:03.874752 kernel: Initializing XFRM netlink socket Nov 8 00:25:03.997185 systemd-networkd[1352]: docker0: Link UP Nov 8 00:25:04.026069 dockerd[2361]: time="2025-11-08T00:25:04.026020374Z" level=info msg="Loading containers: done." Nov 8 00:25:04.074577 dockerd[2361]: time="2025-11-08T00:25:04.074516038Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 8 00:25:04.074846 dockerd[2361]: time="2025-11-08T00:25:04.074648339Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 8 00:25:04.074846 dockerd[2361]: time="2025-11-08T00:25:04.074791941Z" level=info msg="Daemon has completed initialization" Nov 8 00:25:04.134863 dockerd[2361]: time="2025-11-08T00:25:04.134317133Z" level=info msg="API listen on /run/docker.sock" Nov 8 00:25:04.134446 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 8 00:25:04.929316 containerd[1730]: time="2025-11-08T00:25:04.929272771Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Nov 8 00:25:05.681154 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3658848817.mount: Deactivated successfully. Nov 8 00:25:06.289134 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Nov 8 00:25:06.294969 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:25:06.984351 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:25:06.989371 (kubelet)[2549]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:25:07.026752 kubelet[2549]: E1108 00:25:07.026680 2549 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:25:07.029113 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:25:07.029329 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
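dockerd settles on the overlay2 storage driver and warns that native diff is disabled because the kernel enables CONFIG_OVERLAY_FS_REDIRECT_DIR. One way to confirm the driver against the daemon that just logged "API listen on /run/docker.sock" (assumes the docker CLI is present on the box), sketched in Python:

    import subprocess

    # Asks the running daemon which storage driver it chose; given the
    # log above, the expected output is "overlay2".
    driver = subprocess.run(["docker", "info", "--format", "{{.Driver}}"],
                            capture_output=True, text=True).stdout.strip()
    print(driver)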
Nov 8 00:25:07.901969 containerd[1730]: time="2025-11-08T00:25:07.901917415Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:25:07.904117 containerd[1730]: time="2025-11-08T00:25:07.904060339Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=27065400" Nov 8 00:25:07.909716 containerd[1730]: time="2025-11-08T00:25:07.909029997Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:25:07.913361 containerd[1730]: time="2025-11-08T00:25:07.913320647Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:25:07.914410 containerd[1730]: time="2025-11-08T00:25:07.914370259Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 2.985059188s" Nov 8 00:25:07.914495 containerd[1730]: time="2025-11-08T00:25:07.914415460Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\"" Nov 8 00:25:07.915438 containerd[1730]: time="2025-11-08T00:25:07.915278670Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Nov 8 00:25:09.296337 containerd[1730]: time="2025-11-08T00:25:09.296279418Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:25:09.299069 containerd[1730]: time="2025-11-08T00:25:09.298870048Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=21159765" Nov 8 00:25:09.301904 containerd[1730]: time="2025-11-08T00:25:09.301861183Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:25:09.307265 containerd[1730]: time="2025-11-08T00:25:09.307209145Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:25:09.308643 containerd[1730]: time="2025-11-08T00:25:09.308214257Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 1.392642083s" Nov 8 00:25:09.308643 containerd[1730]: time="2025-11-08T00:25:09.308256357Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\"" Nov 8 00:25:09.309427 containerd[1730]: 
time="2025-11-08T00:25:09.309393270Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Nov 8 00:25:10.410939 containerd[1730]: time="2025-11-08T00:25:10.410881470Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:25:10.413949 containerd[1730]: time="2025-11-08T00:25:10.413882505Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=15725101" Nov 8 00:25:10.419941 containerd[1730]: time="2025-11-08T00:25:10.419870975Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:25:10.425059 containerd[1730]: time="2025-11-08T00:25:10.424691431Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:25:10.425816 containerd[1730]: time="2025-11-08T00:25:10.425776343Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 1.116259771s" Nov 8 00:25:10.425899 containerd[1730]: time="2025-11-08T00:25:10.425822544Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\"" Nov 8 00:25:10.426523 containerd[1730]: time="2025-11-08T00:25:10.426474451Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Nov 8 00:25:11.626124 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount533886887.mount: Deactivated successfully. 
Nov 8 00:25:12.007076 containerd[1730]: time="2025-11-08T00:25:12.006941406Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:25:12.010025 containerd[1730]: time="2025-11-08T00:25:12.009974750Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=25964707" Nov 8 00:25:12.012846 containerd[1730]: time="2025-11-08T00:25:12.012792690Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:25:12.016984 containerd[1730]: time="2025-11-08T00:25:12.016930550Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:25:12.017952 containerd[1730]: time="2025-11-08T00:25:12.017483058Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 1.590962506s" Nov 8 00:25:12.017952 containerd[1730]: time="2025-11-08T00:25:12.017522759Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\"" Nov 8 00:25:12.018265 containerd[1730]: time="2025-11-08T00:25:12.018176768Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Nov 8 00:25:12.578232 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1468728558.mount: Deactivated successfully. 
Nov 8 00:25:13.937665 containerd[1730]: time="2025-11-08T00:25:13.937606406Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:25:13.940323 containerd[1730]: time="2025-11-08T00:25:13.940105842Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388015" Nov 8 00:25:13.943252 containerd[1730]: time="2025-11-08T00:25:13.943190787Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:25:13.949070 containerd[1730]: time="2025-11-08T00:25:13.947921455Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:25:13.949070 containerd[1730]: time="2025-11-08T00:25:13.948925469Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.9306536s" Nov 8 00:25:13.949070 containerd[1730]: time="2025-11-08T00:25:13.948962270Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Nov 8 00:25:13.949694 containerd[1730]: time="2025-11-08T00:25:13.949668280Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Nov 8 00:25:14.514405 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2221648782.mount: Deactivated successfully. 
Nov 8 00:25:14.533099 containerd[1730]: time="2025-11-08T00:25:14.533054811Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:25:14.535798 containerd[1730]: time="2025-11-08T00:25:14.535752050Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321226" Nov 8 00:25:14.539708 containerd[1730]: time="2025-11-08T00:25:14.538513190Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:25:14.544515 containerd[1730]: time="2025-11-08T00:25:14.543331159Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:25:14.544515 containerd[1730]: time="2025-11-08T00:25:14.544379374Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 594.679093ms" Nov 8 00:25:14.544515 containerd[1730]: time="2025-11-08T00:25:14.544415775Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Nov 8 00:25:14.545262 containerd[1730]: time="2025-11-08T00:25:14.545230187Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Nov 8 00:25:17.039253 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Nov 8 00:25:17.046965 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:25:17.207937 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:25:17.211748 (kubelet)[2692]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:25:17.816626 kubelet[2692]: E1108 00:25:17.816570 2692 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:25:17.819112 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:25:17.819339 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Nov 8 00:25:18.052605 containerd[1730]: time="2025-11-08T00:25:18.052543871Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:25:18.055126 containerd[1730]: time="2025-11-08T00:25:18.055060607Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=73514601" Nov 8 00:25:18.058243 containerd[1730]: time="2025-11-08T00:25:18.058188153Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:25:18.062802 containerd[1730]: time="2025-11-08T00:25:18.062744918Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:25:18.063904 containerd[1730]: time="2025-11-08T00:25:18.063858334Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 3.518591847s" Nov 8 00:25:18.064010 containerd[1730]: time="2025-11-08T00:25:18.063905035Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Nov 8 00:25:21.412976 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:25:21.426064 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:25:21.464959 systemd[1]: Reloading requested from client PID 2727 ('systemctl') (unit session-9.scope)... Nov 8 00:25:21.464984 systemd[1]: Reloading... Nov 8 00:25:21.616750 zram_generator::config[2770]: No configuration found. Nov 8 00:25:21.726661 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:25:21.807489 systemd[1]: Reloading finished in 341 ms. Nov 8 00:25:21.853944 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 8 00:25:21.854108 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 8 00:25:21.854403 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:25:21.857086 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:25:22.217413 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:25:22.223849 (kubelet)[2836]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:25:22.261581 kubelet[2836]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 8 00:25:22.261581 kubelet[2836]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
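At this point the kubelet finally gets past config loading and starts for real, but the certificate bootstrap and every list/watch in the lines that follow fail with "connection refused" against https://10.200.8.42:6443: the API server it is dialing is itself one of the static pods this kubelet has yet to launch. A sketch of the same reachability probe:

    import socket

    # Reproduces the kubelet's failing dial to the not-yet-running apiserver.
    try:
        with socket.create_connection(("10.200.8.42", 6443), timeout=1):
            print("apiserver reachable")
    except OSError as exc:
        print("apiserver not reachable yet:", exc)  # expected while bootstrapping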
Nov 8 00:25:22.262052 kubelet[2836]: I1108 00:25:22.261611 2836 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:25:23.236779 kubelet[2836]: I1108 00:25:23.236652 2836 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 8 00:25:23.236779 kubelet[2836]: I1108 00:25:23.236680 2836 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:25:23.236779 kubelet[2836]: I1108 00:25:23.236709 2836 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 8 00:25:23.236779 kubelet[2836]: I1108 00:25:23.236716 2836 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 8 00:25:23.237171 kubelet[2836]: I1108 00:25:23.237141 2836 server.go:956] "Client rotation is on, will bootstrap in background" Nov 8 00:25:23.245426 kubelet[2836]: I1108 00:25:23.245323 2836 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:25:23.245616 kubelet[2836]: E1108 00:25:23.245586 2836 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.42:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.42:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 8 00:25:23.255158 kubelet[2836]: E1108 00:25:23.255103 2836 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:25:23.255569 kubelet[2836]: I1108 00:25:23.255187 2836 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Nov 8 00:25:23.258648 kubelet[2836]: I1108 00:25:23.258614 2836 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 8 00:25:23.259571 kubelet[2836]: I1108 00:25:23.259528 2836 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:25:23.259801 kubelet[2836]: I1108 00:25:23.259568 2836 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-2742f1d4ae","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 8 00:25:23.259965 kubelet[2836]: I1108 00:25:23.259808 2836 topology_manager.go:138] "Creating topology manager with none policy" Nov 8 00:25:23.259965 kubelet[2836]: I1108 00:25:23.259823 2836 container_manager_linux.go:306] "Creating device plugin manager" Nov 8 00:25:23.259965 kubelet[2836]: I1108 00:25:23.259940 2836 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 8 00:25:23.264994 kubelet[2836]: I1108 00:25:23.264967 2836 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:25:23.266537 kubelet[2836]: I1108 00:25:23.266514 2836 kubelet.go:475] "Attempting to sync node with API server" Nov 8 00:25:23.266621 kubelet[2836]: I1108 00:25:23.266543 2836 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:25:23.266621 kubelet[2836]: I1108 00:25:23.266573 2836 kubelet.go:387] "Adding apiserver pod source" Nov 8 00:25:23.266621 kubelet[2836]: I1108 00:25:23.266593 2836 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:25:23.270407 kubelet[2836]: E1108 00:25:23.269111 2836 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.42:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 8 00:25:23.270407 kubelet[2836]: E1108 00:25:23.269236 2836 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-2742f1d4ae&limit=500&resourceVersion=0\": 
dial tcp 10.200.8.42:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 8 00:25:23.270407 kubelet[2836]: I1108 00:25:23.269767 2836 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:25:23.270407 kubelet[2836]: I1108 00:25:23.270354 2836 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 8 00:25:23.270407 kubelet[2836]: I1108 00:25:23.270388 2836 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 8 00:25:23.270652 kubelet[2836]: W1108 00:25:23.270485 2836 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 8 00:25:23.273780 kubelet[2836]: I1108 00:25:23.273760 2836 server.go:1262] "Started kubelet" Nov 8 00:25:23.275305 kubelet[2836]: I1108 00:25:23.275277 2836 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:25:23.277745 kubelet[2836]: I1108 00:25:23.277485 2836 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:25:23.277745 kubelet[2836]: I1108 00:25:23.277531 2836 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 8 00:25:23.277745 kubelet[2836]: I1108 00:25:23.277591 2836 server.go:310] "Adding debug handlers to kubelet server" Nov 8 00:25:23.277901 kubelet[2836]: I1108 00:25:23.277891 2836 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:25:23.279646 kubelet[2836]: I1108 00:25:23.279624 2836 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:25:23.284441 kubelet[2836]: E1108 00:25:23.283060 2836 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.42:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.42:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-2742f1d4ae.1875e05df18050e3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-2742f1d4ae,UID:ci-4081.3.6-n-2742f1d4ae,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-2742f1d4ae,},FirstTimestamp:2025-11-08 00:25:23.273715939 +0000 UTC m=+1.046200737,LastTimestamp:2025-11-08 00:25:23.273715939 +0000 UTC m=+1.046200737,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-2742f1d4ae,}" Nov 8 00:25:23.286335 kubelet[2836]: I1108 00:25:23.285954 2836 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:25:23.290174 kubelet[2836]: E1108 00:25:23.290150 2836 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-2742f1d4ae\" not found" Nov 8 00:25:23.290476 kubelet[2836]: I1108 00:25:23.290461 2836 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 8 00:25:23.290839 kubelet[2836]: I1108 00:25:23.290821 2836 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 8 
00:25:23.290987 kubelet[2836]: I1108 00:25:23.290975 2836 reconciler.go:29] "Reconciler: start to sync state" Nov 8 00:25:23.291516 kubelet[2836]: E1108 00:25:23.291492 2836 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.42:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 8 00:25:23.292282 kubelet[2836]: I1108 00:25:23.292259 2836 factory.go:223] Registration of the systemd container factory successfully Nov 8 00:25:23.292474 kubelet[2836]: I1108 00:25:23.292453 2836 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:25:23.292820 kubelet[2836]: E1108 00:25:23.292800 2836 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 8 00:25:23.293293 kubelet[2836]: E1108 00:25:23.293262 2836 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-2742f1d4ae?timeout=10s\": dial tcp 10.200.8.42:6443: connect: connection refused" interval="200ms" Nov 8 00:25:23.294117 kubelet[2836]: I1108 00:25:23.294098 2836 factory.go:223] Registration of the containerd container factory successfully Nov 8 00:25:23.331638 kubelet[2836]: I1108 00:25:23.331586 2836 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 8 00:25:23.334380 kubelet[2836]: I1108 00:25:23.334177 2836 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Nov 8 00:25:23.334380 kubelet[2836]: I1108 00:25:23.334197 2836 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 8 00:25:23.334380 kubelet[2836]: I1108 00:25:23.334223 2836 kubelet.go:2427] "Starting kubelet main sync loop" Nov 8 00:25:23.334380 kubelet[2836]: E1108 00:25:23.334269 2836 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:25:23.336966 kubelet[2836]: E1108 00:25:23.336940 2836 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.42:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 8 00:25:23.338542 kubelet[2836]: I1108 00:25:23.338523 2836 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:25:23.338735 kubelet[2836]: I1108 00:25:23.338649 2836 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:25:23.338735 kubelet[2836]: I1108 00:25:23.338674 2836 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:25:23.340171 kubelet[2836]: E1108 00:25:23.340089 2836 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.42:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.42:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-2742f1d4ae.1875e05df18050e3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-2742f1d4ae,UID:ci-4081.3.6-n-2742f1d4ae,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-2742f1d4ae,},FirstTimestamp:2025-11-08 00:25:23.273715939 +0000 UTC m=+1.046200737,LastTimestamp:2025-11-08 00:25:23.273715939 +0000 UTC m=+1.046200737,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-2742f1d4ae,}" Nov 8 00:25:23.344337 kubelet[2836]: I1108 00:25:23.344322 2836 policy_none.go:49] "None policy: Start" Nov 8 00:25:23.344428 kubelet[2836]: I1108 00:25:23.344420 2836 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 8 00:25:23.344494 kubelet[2836]: I1108 00:25:23.344474 2836 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 8 00:25:23.349446 kubelet[2836]: I1108 00:25:23.349433 2836 policy_none.go:47] "Start" Nov 8 00:25:23.354201 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 8 00:25:23.362584 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 8 00:25:23.366192 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Nov 8 00:25:23.376478 kubelet[2836]: E1108 00:25:23.376455 2836 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 8 00:25:23.376836 kubelet[2836]: I1108 00:25:23.376821 2836 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:25:23.376955 kubelet[2836]: I1108 00:25:23.376921 2836 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:25:23.377311 kubelet[2836]: I1108 00:25:23.377296 2836 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:25:23.378932 kubelet[2836]: E1108 00:25:23.378913 2836 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 8 00:25:23.379067 kubelet[2836]: E1108 00:25:23.379021 2836 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-2742f1d4ae\" not found" Nov 8 00:25:23.450413 systemd[1]: Created slice kubepods-burstable-pod154c8264fac9a31388f1b3f1a7acce1a.slice - libcontainer container kubepods-burstable-pod154c8264fac9a31388f1b3f1a7acce1a.slice. Nov 8 00:25:23.461453 kubelet[2836]: E1108 00:25:23.461403 2836 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-2742f1d4ae\" not found" node="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:25:23.466777 systemd[1]: Created slice kubepods-burstable-pod283f85d1b5d69bb380e941cdc29a5ff5.slice - libcontainer container kubepods-burstable-pod283f85d1b5d69bb380e941cdc29a5ff5.slice. Nov 8 00:25:23.469389 kubelet[2836]: E1108 00:25:23.469192 2836 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-2742f1d4ae\" not found" node="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:25:23.471174 systemd[1]: Created slice kubepods-burstable-podc15bdae5276a07f278c76cb6b29e0c08.slice - libcontainer container kubepods-burstable-podc15bdae5276a07f278c76cb6b29e0c08.slice. 
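With the systemd cgroup driver, each static pod gets a slice named from its QoS class and pod UID, which is what the "Created slice" lines above show. A sketch of the naming rule as we understand it (the dash-to-underscore escaping applies to UIDs containing dashes; the hex UIDs in this log pass through unchanged):

    def pod_slice_name(qos_class: str, pod_uid: str) -> str:
        # kubelet's systemd cgroup driver nests slices via dashes, so any
        # dashes inside the UID are escaped to underscores first.
        return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

    # Matches "kubepods-burstable-pod154c8264fac9a31388f1b3f1a7acce1a.slice" above.
    print(pod_slice_name("burstable", "154c8264fac9a31388f1b3f1a7acce1a"))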
Nov 8 00:25:23.472915 kubelet[2836]: E1108 00:25:23.472893 2836 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-2742f1d4ae\" not found" node="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:25:23.479209 kubelet[2836]: I1108 00:25:23.479192 2836 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:25:23.479714 kubelet[2836]: E1108 00:25:23.479689 2836 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.42:6443/api/v1/nodes\": dial tcp 10.200.8.42:6443: connect: connection refused" node="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:25:23.492111 kubelet[2836]: I1108 00:25:23.492017 2836 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/154c8264fac9a31388f1b3f1a7acce1a-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-2742f1d4ae\" (UID: \"154c8264fac9a31388f1b3f1a7acce1a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2742f1d4ae" Nov 8 00:25:23.494315 kubelet[2836]: E1108 00:25:23.494286 2836 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-2742f1d4ae?timeout=10s\": dial tcp 10.200.8.42:6443: connect: connection refused" interval="400ms" Nov 8 00:25:23.592699 kubelet[2836]: I1108 00:25:23.592642 2836 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/154c8264fac9a31388f1b3f1a7acce1a-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-2742f1d4ae\" (UID: \"154c8264fac9a31388f1b3f1a7acce1a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2742f1d4ae" Nov 8 00:25:23.592699 kubelet[2836]: I1108 00:25:23.592702 2836 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/154c8264fac9a31388f1b3f1a7acce1a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-2742f1d4ae\" (UID: \"154c8264fac9a31388f1b3f1a7acce1a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2742f1d4ae" Nov 8 00:25:23.592921 kubelet[2836]: I1108 00:25:23.592747 2836 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/283f85d1b5d69bb380e941cdc29a5ff5-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-2742f1d4ae\" (UID: \"283f85d1b5d69bb380e941cdc29a5ff5\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-2742f1d4ae" Nov 8 00:25:23.592921 kubelet[2836]: I1108 00:25:23.592772 2836 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c15bdae5276a07f278c76cb6b29e0c08-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-2742f1d4ae\" (UID: \"c15bdae5276a07f278c76cb6b29e0c08\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-2742f1d4ae" Nov 8 00:25:23.592921 kubelet[2836]: I1108 00:25:23.592791 2836 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c15bdae5276a07f278c76cb6b29e0c08-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-2742f1d4ae\" (UID: \"c15bdae5276a07f278c76cb6b29e0c08\") " 
pod="kube-system/kube-apiserver-ci-4081.3.6-n-2742f1d4ae" Nov 8 00:25:23.592921 kubelet[2836]: I1108 00:25:23.592811 2836 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c15bdae5276a07f278c76cb6b29e0c08-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-2742f1d4ae\" (UID: \"c15bdae5276a07f278c76cb6b29e0c08\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-2742f1d4ae" Nov 8 00:25:23.592921 kubelet[2836]: I1108 00:25:23.592870 2836 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/154c8264fac9a31388f1b3f1a7acce1a-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-2742f1d4ae\" (UID: \"154c8264fac9a31388f1b3f1a7acce1a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2742f1d4ae" Nov 8 00:25:23.593135 kubelet[2836]: I1108 00:25:23.592893 2836 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/154c8264fac9a31388f1b3f1a7acce1a-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-2742f1d4ae\" (UID: \"154c8264fac9a31388f1b3f1a7acce1a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2742f1d4ae" Nov 8 00:25:23.682546 kubelet[2836]: I1108 00:25:23.682505 2836 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:25:23.683028 kubelet[2836]: E1108 00:25:23.682996 2836 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.42:6443/api/v1/nodes\": dial tcp 10.200.8.42:6443: connect: connection refused" node="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:25:23.769203 containerd[1730]: time="2025-11-08T00:25:23.768889372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-2742f1d4ae,Uid:154c8264fac9a31388f1b3f1a7acce1a,Namespace:kube-system,Attempt:0,}" Nov 8 00:25:23.774263 containerd[1730]: time="2025-11-08T00:25:23.774223536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-2742f1d4ae,Uid:283f85d1b5d69bb380e941cdc29a5ff5,Namespace:kube-system,Attempt:0,}" Nov 8 00:25:23.780741 containerd[1730]: time="2025-11-08T00:25:23.780418211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-2742f1d4ae,Uid:c15bdae5276a07f278c76cb6b29e0c08,Namespace:kube-system,Attempt:0,}" Nov 8 00:25:23.895027 kubelet[2836]: E1108 00:25:23.894981 2836 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-2742f1d4ae?timeout=10s\": dial tcp 10.200.8.42:6443: connect: connection refused" interval="800ms" Nov 8 00:25:24.085396 kubelet[2836]: I1108 00:25:24.085341 2836 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:25:24.085831 kubelet[2836]: E1108 00:25:24.085787 2836 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.42:6443/api/v1/nodes\": dial tcp 10.200.8.42:6443: connect: connection refused" node="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:25:24.286528 kubelet[2836]: E1108 00:25:24.286488 2836 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get 
\"https://10.200.8.42:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-2742f1d4ae&limit=500&resourceVersion=0\": dial tcp 10.200.8.42:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 8 00:25:24.367183 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3127944279.mount: Deactivated successfully. Nov 8 00:25:24.387785 containerd[1730]: time="2025-11-08T00:25:24.387712087Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:25:24.391552 containerd[1730]: time="2025-11-08T00:25:24.391493233Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Nov 8 00:25:24.393853 containerd[1730]: time="2025-11-08T00:25:24.393813761Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:25:24.396278 containerd[1730]: time="2025-11-08T00:25:24.396238490Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:25:24.398610 containerd[1730]: time="2025-11-08T00:25:24.398506617Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:25:24.401765 containerd[1730]: time="2025-11-08T00:25:24.401716355Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:25:24.403968 containerd[1730]: time="2025-11-08T00:25:24.403687179Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 8 00:25:24.408020 containerd[1730]: time="2025-11-08T00:25:24.407987930Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:25:24.408693 containerd[1730]: time="2025-11-08T00:25:24.408657538Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 634.354601ms" Nov 8 00:25:24.410903 containerd[1730]: time="2025-11-08T00:25:24.410870065Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 630.387454ms" Nov 8 00:25:24.411463 containerd[1730]: time="2025-11-08T00:25:24.411427972Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 642.467899ms" Nov 8 00:25:24.416505 kubelet[2836]: E1108 00:25:24.416476 2836 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.42:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.42:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 8 00:25:24.572481 kubelet[2836]: E1108 00:25:24.572437 2836 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.42:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.42:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 8 00:25:24.681853 kubelet[2836]: E1108 00:25:24.681651 2836 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.42:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.42:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 8 00:25:24.695468 kubelet[2836]: E1108 00:25:24.695417 2836 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.42:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-2742f1d4ae?timeout=10s\": dial tcp 10.200.8.42:6443: connect: connection refused" interval="1.6s" Nov 8 00:25:24.888678 kubelet[2836]: I1108 00:25:24.888631 2836 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:25:24.889235 kubelet[2836]: E1108 00:25:24.889094 2836 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.42:6443/api/v1/nodes\": dial tcp 10.200.8.42:6443: connect: connection refused" node="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:25:24.953997 containerd[1730]: time="2025-11-08T00:25:24.953160063Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:25:24.953997 containerd[1730]: time="2025-11-08T00:25:24.953222464Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:25:24.953997 containerd[1730]: time="2025-11-08T00:25:24.953249664Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:25:24.953997 containerd[1730]: time="2025-11-08T00:25:24.953364365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:25:24.959410 containerd[1730]: time="2025-11-08T00:25:24.959275836Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:25:24.959410 containerd[1730]: time="2025-11-08T00:25:24.959343537Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:25:24.959410 containerd[1730]: time="2025-11-08T00:25:24.959361937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:25:24.959734 containerd[1730]: time="2025-11-08T00:25:24.959496439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:25:24.960843 containerd[1730]: time="2025-11-08T00:25:24.960707953Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:25:24.960843 containerd[1730]: time="2025-11-08T00:25:24.960813255Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:25:24.961050 containerd[1730]: time="2025-11-08T00:25:24.960835655Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:25:24.961050 containerd[1730]: time="2025-11-08T00:25:24.960955856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:25:25.002893 systemd[1]: Started cri-containerd-9dfd2d3fed64b263dc410cdf3a36fb64393d4410c2ce694287ff346d793cd470.scope - libcontainer container 9dfd2d3fed64b263dc410cdf3a36fb64393d4410c2ce694287ff346d793cd470. Nov 8 00:25:25.008791 systemd[1]: Started cri-containerd-73112fca15277f0f04f1c265f6b164df730cce23bf2867fa048710f9615f960d.scope - libcontainer container 73112fca15277f0f04f1c265f6b164df730cce23bf2867fa048710f9615f960d. Nov 8 00:25:25.011888 systemd[1]: Started cri-containerd-76737a14f5c50ce5b49ad06453ec950ed6bcd469903448949e6f26ee1b0dcda4.scope - libcontainer container 76737a14f5c50ce5b49ad06453ec950ed6bcd469903448949e6f26ee1b0dcda4. Nov 8 00:25:25.094172 containerd[1730]: time="2025-11-08T00:25:25.093982450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-2742f1d4ae,Uid:c15bdae5276a07f278c76cb6b29e0c08,Namespace:kube-system,Attempt:0,} returns sandbox id \"73112fca15277f0f04f1c265f6b164df730cce23bf2867fa048710f9615f960d\"" Nov 8 00:25:25.097217 containerd[1730]: time="2025-11-08T00:25:25.096961686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-2742f1d4ae,Uid:283f85d1b5d69bb380e941cdc29a5ff5,Namespace:kube-system,Attempt:0,} returns sandbox id \"9dfd2d3fed64b263dc410cdf3a36fb64393d4410c2ce694287ff346d793cd470\"" Nov 8 00:25:25.108772 containerd[1730]: time="2025-11-08T00:25:25.108734527Z" level=info msg="CreateContainer within sandbox \"9dfd2d3fed64b263dc410cdf3a36fb64393d4410c2ce694287ff346d793cd470\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 8 00:25:25.113537 containerd[1730]: time="2025-11-08T00:25:25.113426283Z" level=info msg="CreateContainer within sandbox \"73112fca15277f0f04f1c265f6b164df730cce23bf2867fa048710f9615f960d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 8 00:25:25.121184 containerd[1730]: time="2025-11-08T00:25:25.121146876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-2742f1d4ae,Uid:154c8264fac9a31388f1b3f1a7acce1a,Namespace:kube-system,Attempt:0,} returns sandbox id \"76737a14f5c50ce5b49ad06453ec950ed6bcd469903448949e6f26ee1b0dcda4\"" Nov 8 00:25:25.131415 containerd[1730]: time="2025-11-08T00:25:25.131372598Z" level=info msg="CreateContainer within sandbox \"76737a14f5c50ce5b49ad06453ec950ed6bcd469903448949e6f26ee1b0dcda4\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 8 00:25:25.185064 containerd[1730]: time="2025-11-08T00:25:25.185005141Z" level=info msg="CreateContainer within sandbox \"9dfd2d3fed64b263dc410cdf3a36fb64393d4410c2ce694287ff346d793cd470\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8dff7c32b82eb61011f307291b3a3d8306ecfeb17db74b7625b31361007c5371\"" Nov 8 00:25:25.185911 containerd[1730]: time="2025-11-08T00:25:25.185873151Z" level=info msg="StartContainer for \"8dff7c32b82eb61011f307291b3a3d8306ecfeb17db74b7625b31361007c5371\"" Nov 8 00:25:25.198205 containerd[1730]: time="2025-11-08T00:25:25.198048097Z" level=info msg="CreateContainer within sandbox \"73112fca15277f0f04f1c265f6b164df730cce23bf2867fa048710f9615f960d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e57c742d7ff2ed0870e988d12a93d85bb6ab64c290ef01db3a6750cfd7aa6e14\"" Nov 8 00:25:25.199398 containerd[1730]: time="2025-11-08T00:25:25.198867307Z" level=info msg="StartContainer for \"e57c742d7ff2ed0870e988d12a93d85bb6ab64c290ef01db3a6750cfd7aa6e14\"" Nov 8 00:25:25.206369 containerd[1730]: time="2025-11-08T00:25:25.206250996Z" level=info msg="CreateContainer within sandbox \"76737a14f5c50ce5b49ad06453ec950ed6bcd469903448949e6f26ee1b0dcda4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"81c120c5f16f382ea5ebbfa7380e5721e3dbb1636bd9806deb51ec5341979d42\"" Nov 8 00:25:25.207038 containerd[1730]: time="2025-11-08T00:25:25.207009005Z" level=info msg="StartContainer for \"81c120c5f16f382ea5ebbfa7380e5721e3dbb1636bd9806deb51ec5341979d42\"" Nov 8 00:25:25.226038 systemd[1]: Started cri-containerd-8dff7c32b82eb61011f307291b3a3d8306ecfeb17db74b7625b31361007c5371.scope - libcontainer container 8dff7c32b82eb61011f307291b3a3d8306ecfeb17db74b7625b31361007c5371. Nov 8 00:25:25.256057 systemd[1]: Started cri-containerd-81c120c5f16f382ea5ebbfa7380e5721e3dbb1636bd9806deb51ec5341979d42.scope - libcontainer container 81c120c5f16f382ea5ebbfa7380e5721e3dbb1636bd9806deb51ec5341979d42. Nov 8 00:25:25.266151 systemd[1]: Started cri-containerd-e57c742d7ff2ed0870e988d12a93d85bb6ab64c290ef01db3a6750cfd7aa6e14.scope - libcontainer container e57c742d7ff2ed0870e988d12a93d85bb6ab64c290ef01db3a6750cfd7aa6e14. 
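Every kubelet request above fails with `connect: connection refused` because the API server it is registering against is itself one of the static pods whose containers are only now being started. A minimal standalone sketch of that retry-until-listening pattern (a hypothetical probe against the endpoint from the log, not kubelet code):

// Hypothetical probe: retry a TCP dial against the API server endpoint
// from the log until it accepts connections, mirroring how the kubelet's
// registration and lease loops keep retrying through "connection refused".
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const endpoint = "10.200.8.42:6443" // API server address from the log
	backoff := 200 * time.Millisecond
	for attempt := 1; ; attempt++ {
		conn, err := net.DialTimeout("tcp", endpoint, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Printf("attempt %d: %s is accepting connections\n", attempt, endpoint)
			return
		}
		fmt.Printf("attempt %d: %v (retrying in %v)\n", attempt, err, backoff)
		time.Sleep(backoff)
		if backoff < 5*time.Second {
			backoff *= 2 // exponential backoff, capped
		}
	}
}

Once the kube-apiserver container started just above comes up and begins listening, the registration, lease, and watch calls in the log start succeeding.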
Nov 8 00:25:25.305785 kubelet[2836]: E1108 00:25:25.303700 2836 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.42:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.42:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 8 00:25:25.334565 containerd[1730]: time="2025-11-08T00:25:25.334521133Z" level=info msg="StartContainer for \"8dff7c32b82eb61011f307291b3a3d8306ecfeb17db74b7625b31361007c5371\" returns successfully" Nov 8 00:25:25.344048 containerd[1730]: time="2025-11-08T00:25:25.344007946Z" level=info msg="StartContainer for \"e57c742d7ff2ed0870e988d12a93d85bb6ab64c290ef01db3a6750cfd7aa6e14\" returns successfully" Nov 8 00:25:25.374159 kubelet[2836]: E1108 00:25:25.373990 2836 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-2742f1d4ae\" not found" node="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:25:25.381500 kubelet[2836]: E1108 00:25:25.381468 2836 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-2742f1d4ae\" not found" node="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:25:25.393990 containerd[1730]: time="2025-11-08T00:25:25.393946245Z" level=info msg="StartContainer for \"81c120c5f16f382ea5ebbfa7380e5721e3dbb1636bd9806deb51ec5341979d42\" returns successfully" Nov 8 00:25:27.359751 kubelet[2836]: I1108 00:25:27.359200 2836 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:25:27.367601 kubelet[2836]: E1108 00:25:27.366555 2836 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-2742f1d4ae\" not found" node="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:25:27.368748 kubelet[2836]: E1108 00:25:27.368397 2836 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-2742f1d4ae\" not found" node="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:25:27.369745 kubelet[2836]: E1108 00:25:27.369205 2836 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-2742f1d4ae\" not found" node="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:25:28.140978 kubelet[2836]: I1108 00:25:28.140044 2836 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:25:28.409462 kubelet[2836]: I1108 00:25:28.194835 2836 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-2742f1d4ae" Nov 8 00:25:28.409462 kubelet[2836]: E1108 00:25:28.208415 2836 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-2742f1d4ae\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-2742f1d4ae" Nov 8 00:25:28.409462 kubelet[2836]: I1108 00:25:28.208449 2836 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2742f1d4ae" Nov 8 00:25:28.409462 kubelet[2836]: E1108 00:25:28.212299 2836 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-2742f1d4ae\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2742f1d4ae" Nov 8 
00:25:28.409462 kubelet[2836]: I1108 00:25:28.212322 2836 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-2742f1d4ae" Nov 8 00:25:28.409462 kubelet[2836]: E1108 00:25:28.213831 2836 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-2742f1d4ae\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-2742f1d4ae" Nov 8 00:25:28.409462 kubelet[2836]: I1108 00:25:28.353323 2836 apiserver.go:52] "Watching apiserver" Nov 8 00:25:28.409462 kubelet[2836]: I1108 00:25:28.366920 2836 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2742f1d4ae" Nov 8 00:25:28.409462 kubelet[2836]: E1108 00:25:28.368926 2836 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-2742f1d4ae\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2742f1d4ae" Nov 8 00:25:28.409462 kubelet[2836]: I1108 00:25:28.392097 2836 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 8 00:25:29.543158 kubelet[2836]: I1108 00:25:29.542525 2836 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-2742f1d4ae" Nov 8 00:25:29.550960 kubelet[2836]: I1108 00:25:29.550774 2836 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 8 00:25:29.909020 systemd[1]: Reloading requested from client PID 3124 ('systemctl') (unit session-9.scope)... Nov 8 00:25:29.909037 systemd[1]: Reloading... Nov 8 00:25:30.002714 zram_generator::config[3163]: No configuration found. Nov 8 00:25:30.137110 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 8 00:25:30.230828 systemd[1]: Reloading finished in 321 ms. Nov 8 00:25:30.270263 kubelet[2836]: I1108 00:25:30.270061 2836 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:25:30.270394 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:25:30.286331 systemd[1]: kubelet.service: Deactivated successfully. Nov 8 00:25:30.286633 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:25:30.286710 systemd[1]: kubelet.service: Consumed 1.418s CPU time, 126.7M memory peak, 0B memory swap peak. Nov 8 00:25:30.291089 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:25:30.397435 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:25:30.411098 (kubelet)[3231]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:25:31.051263 kubelet[3231]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 8 00:25:31.051263 kubelet[3231]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
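The "Failed creating a mirror pod ... no PriorityClass with name system-node-critical was found" errors above are transient: the built-in system priority classes are ensured by kube-apiserver itself shortly after it starts serving, after which mirror-pod creation goes through. For illustration only, the same object could be created by hand with client-go (the kubeconfig path here is an assumption, not from the log):

// Sketch only: recreating the built-in system-node-critical PriorityClass
// with client-go. In a real cluster kube-apiserver ensures this object on
// its own, which is why the errors in the log stop without intervention.
package main

import (
	"context"
	"fmt"

	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical admin kubeconfig path.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pc := &schedulingv1.PriorityClass{
		ObjectMeta:  metav1.ObjectMeta{Name: "system-node-critical"},
		Value:       2000001000, // value used by the built-in class
		Description: "Used for system critical pods that must not be moved from their current node.",
	}
	created, err := cs.SchedulingV1().PriorityClasses().Create(context.TODO(), pc, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created PriorityClass", created.Name)
}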
Nov 8 00:25:31.051263 kubelet[3231]: I1108 00:25:30.453607 3231 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:25:31.051263 kubelet[3231]: I1108 00:25:30.459429 3231 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 8 00:25:31.051263 kubelet[3231]: I1108 00:25:30.459446 3231 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:25:31.051263 kubelet[3231]: I1108 00:25:30.459466 3231 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 8 00:25:31.051263 kubelet[3231]: I1108 00:25:30.459472 3231 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 8 00:25:31.051263 kubelet[3231]: I1108 00:25:30.459707 3231 server.go:956] "Client rotation is on, will bootstrap in background" Nov 8 00:25:31.051263 kubelet[3231]: I1108 00:25:30.460731 3231 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 8 00:25:31.051263 kubelet[3231]: I1108 00:25:30.462667 3231 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:25:31.051263 kubelet[3231]: E1108 00:25:30.467829 3231 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 8 00:25:31.051263 kubelet[3231]: I1108 00:25:30.467877 3231 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Nov 8 00:25:31.051263 kubelet[3231]: I1108 00:25:30.471234 3231 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 8 00:25:31.051263 kubelet[3231]: I1108 00:25:30.471442 3231 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:25:31.057945 kubelet[3231]: I1108 00:25:30.471469 3231 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-2742f1d4ae","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 8 00:25:31.057945 kubelet[3231]: I1108 00:25:30.471591 3231 topology_manager.go:138] "Creating topology manager with none policy" Nov 8 00:25:31.057945 kubelet[3231]: I1108 00:25:30.471599 3231 container_manager_linux.go:306] "Creating device plugin manager" Nov 8 00:25:31.057945 kubelet[3231]: I1108 00:25:30.471621 3231 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 8 00:25:31.057945 kubelet[3231]: I1108 00:25:31.045038 3231 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:25:31.058198 kubelet[3231]: I1108 00:25:31.045266 3231 kubelet.go:475] "Attempting to sync node with API server" Nov 8 00:25:31.058198 kubelet[3231]: I1108 00:25:31.045290 3231 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:25:31.058198 kubelet[3231]: I1108 00:25:31.045319 3231 kubelet.go:387] "Adding apiserver pod source" Nov 8 00:25:31.058198 kubelet[3231]: I1108 00:25:31.045339 3231 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:25:31.066801 kubelet[3231]: I1108 00:25:31.066414 3231 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 8 00:25:31.067525 kubelet[3231]: I1108 00:25:31.067362 3231 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 8 00:25:31.067525 kubelet[3231]: I1108 00:25:31.067422 3231 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 8 00:25:31.071920 
kubelet[3231]: I1108 00:25:31.071893 3231 server.go:1262] "Started kubelet" Nov 8 00:25:31.075749 kubelet[3231]: I1108 00:25:31.074916 3231 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:25:31.076189 kubelet[3231]: I1108 00:25:31.076158 3231 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:25:31.078379 kubelet[3231]: I1108 00:25:31.077614 3231 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 8 00:25:31.078379 kubelet[3231]: I1108 00:25:31.078071 3231 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 8 00:25:31.078379 kubelet[3231]: I1108 00:25:31.078203 3231 reconciler.go:29] "Reconciler: start to sync state" Nov 8 00:25:31.080852 kubelet[3231]: I1108 00:25:31.078545 3231 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:25:31.080852 kubelet[3231]: I1108 00:25:31.078592 3231 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 8 00:25:31.080852 kubelet[3231]: I1108 00:25:31.079275 3231 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:25:31.081460 kubelet[3231]: I1108 00:25:31.081441 3231 server.go:310] "Adding debug handlers to kubelet server" Nov 8 00:25:31.085425 kubelet[3231]: I1108 00:25:31.085346 3231 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:25:31.092154 kubelet[3231]: I1108 00:25:31.091517 3231 factory.go:223] Registration of the systemd container factory successfully Nov 8 00:25:31.092242 kubelet[3231]: I1108 00:25:31.092172 3231 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:25:31.105698 kubelet[3231]: E1108 00:25:31.105665 3231 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 8 00:25:31.108693 kubelet[3231]: I1108 00:25:31.108469 3231 factory.go:223] Registration of the containerd container factory successfully Nov 8 00:25:31.125147 kubelet[3231]: I1108 00:25:31.125114 3231 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 8 00:25:31.128028 kubelet[3231]: I1108 00:25:31.127945 3231 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Nov 8 00:25:31.128028 kubelet[3231]: I1108 00:25:31.127970 3231 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 8 00:25:31.128028 kubelet[3231]: I1108 00:25:31.128001 3231 kubelet.go:2427] "Starting kubelet main sync loop" Nov 8 00:25:31.128373 kubelet[3231]: E1108 00:25:31.128048 3231 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:25:31.178375 kubelet[3231]: I1108 00:25:31.178337 3231 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:25:31.178659 kubelet[3231]: I1108 00:25:31.178609 3231 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:25:31.178659 kubelet[3231]: I1108 00:25:31.178637 3231 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:25:31.180121 kubelet[3231]: I1108 00:25:31.178889 3231 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 8 00:25:31.180121 kubelet[3231]: I1108 00:25:31.178907 3231 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 8 00:25:31.180121 kubelet[3231]: I1108 00:25:31.178947 3231 policy_none.go:49] "None policy: Start" Nov 8 00:25:31.180121 kubelet[3231]: I1108 00:25:31.178961 3231 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 8 00:25:31.180121 kubelet[3231]: I1108 00:25:31.178974 3231 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 8 00:25:31.180121 kubelet[3231]: I1108 00:25:31.179129 3231 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Nov 8 00:25:31.180121 kubelet[3231]: I1108 00:25:31.179141 3231 policy_none.go:47] "Start" Nov 8 00:25:31.187855 kubelet[3231]: E1108 00:25:31.187018 3231 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 8 00:25:31.187855 kubelet[3231]: I1108 00:25:31.187674 3231 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:25:31.187855 kubelet[3231]: I1108 00:25:31.187707 3231 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:25:31.190006 kubelet[3231]: I1108 00:25:31.189716 3231 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:25:31.195366 kubelet[3231]: E1108 00:25:31.194941 3231 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 8 00:25:31.229929 kubelet[3231]: I1108 00:25:31.229883 3231 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-2742f1d4ae" Nov 8 00:25:31.230969 kubelet[3231]: I1108 00:25:31.230377 3231 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2742f1d4ae" Nov 8 00:25:31.230969 kubelet[3231]: I1108 00:25:31.230689 3231 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-2742f1d4ae" Nov 8 00:25:31.243737 kubelet[3231]: I1108 00:25:31.243689 3231 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 8 00:25:31.247016 kubelet[3231]: I1108 00:25:31.246981 3231 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 8 00:25:31.248003 kubelet[3231]: I1108 00:25:31.247576 3231 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 8 00:25:31.248003 kubelet[3231]: E1108 00:25:31.247643 3231 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-2742f1d4ae\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-n-2742f1d4ae" Nov 8 00:25:31.301992 kubelet[3231]: I1108 00:25:31.301868 3231 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:25:31.317295 kubelet[3231]: I1108 00:25:31.316567 3231 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:25:31.317295 kubelet[3231]: I1108 00:25:31.316659 3231 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:25:31.380194 kubelet[3231]: I1108 00:25:31.379831 3231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c15bdae5276a07f278c76cb6b29e0c08-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-2742f1d4ae\" (UID: \"c15bdae5276a07f278c76cb6b29e0c08\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-2742f1d4ae" Nov 8 00:25:31.380194 kubelet[3231]: I1108 00:25:31.379890 3231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/154c8264fac9a31388f1b3f1a7acce1a-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-2742f1d4ae\" (UID: \"154c8264fac9a31388f1b3f1a7acce1a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2742f1d4ae" Nov 8 00:25:31.380194 kubelet[3231]: I1108 00:25:31.379923 3231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/154c8264fac9a31388f1b3f1a7acce1a-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-2742f1d4ae\" (UID: \"154c8264fac9a31388f1b3f1a7acce1a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2742f1d4ae" Nov 8 00:25:31.380194 kubelet[3231]: I1108 00:25:31.379962 3231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/154c8264fac9a31388f1b3f1a7acce1a-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-2742f1d4ae\" (UID: \"154c8264fac9a31388f1b3f1a7acce1a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2742f1d4ae" Nov 8 00:25:31.380194 kubelet[3231]: I1108 00:25:31.379997 3231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/283f85d1b5d69bb380e941cdc29a5ff5-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-2742f1d4ae\" (UID: \"283f85d1b5d69bb380e941cdc29a5ff5\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-2742f1d4ae" Nov 8 00:25:31.380460 kubelet[3231]: I1108 00:25:31.380020 3231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/154c8264fac9a31388f1b3f1a7acce1a-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-2742f1d4ae\" (UID: \"154c8264fac9a31388f1b3f1a7acce1a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2742f1d4ae" Nov 8 00:25:31.380460 kubelet[3231]: I1108 00:25:31.380054 3231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/154c8264fac9a31388f1b3f1a7acce1a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-2742f1d4ae\" (UID: \"154c8264fac9a31388f1b3f1a7acce1a\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2742f1d4ae" Nov 8 00:25:31.380460 kubelet[3231]: I1108 00:25:31.380081 3231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c15bdae5276a07f278c76cb6b29e0c08-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-2742f1d4ae\" (UID: \"c15bdae5276a07f278c76cb6b29e0c08\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-2742f1d4ae" Nov 8 00:25:31.380460 kubelet[3231]: I1108 00:25:31.380102 3231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c15bdae5276a07f278c76cb6b29e0c08-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-2742f1d4ae\" (UID: \"c15bdae5276a07f278c76cb6b29e0c08\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-2742f1d4ae" Nov 8 00:25:32.047032 kubelet[3231]: I1108 00:25:32.046691 3231 apiserver.go:52] "Watching apiserver" Nov 8 00:25:32.078407 kubelet[3231]: I1108 00:25:32.078370 3231 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 8 00:25:32.155924 kubelet[3231]: I1108 00:25:32.155896 3231 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-2742f1d4ae" Nov 8 00:25:32.158424 kubelet[3231]: I1108 00:25:32.158392 3231 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-2742f1d4ae" Nov 8 00:25:32.176763 kubelet[3231]: I1108 00:25:32.173680 3231 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 8 00:25:32.176763 kubelet[3231]: E1108 00:25:32.173786 3231 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-2742f1d4ae\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-n-2742f1d4ae" Nov 8 00:25:32.176763 kubelet[3231]: I1108 00:25:32.174028 3231 warnings.go:110] "Warning: 
metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 8 00:25:32.176763 kubelet[3231]: E1108 00:25:32.174081 3231 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-2742f1d4ae\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-n-2742f1d4ae" Nov 8 00:25:32.201257 kubelet[3231]: I1108 00:25:32.200812 3231 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-n-2742f1d4ae" podStartSLOduration=1.200792336 podStartE2EDuration="1.200792336s" podCreationTimestamp="2025-11-08 00:25:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:25:32.191493523 +0000 UTC m=+1.776033623" watchObservedRunningTime="2025-11-08 00:25:32.200792336 +0000 UTC m=+1.785332536" Nov 8 00:25:32.210824 kubelet[3231]: I1108 00:25:32.210764 3231 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2742f1d4ae" podStartSLOduration=1.210748058 podStartE2EDuration="1.210748058s" podCreationTimestamp="2025-11-08 00:25:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:25:32.201141241 +0000 UTC m=+1.785681441" watchObservedRunningTime="2025-11-08 00:25:32.210748058 +0000 UTC m=+1.795288158" Nov 8 00:25:35.774142 kubelet[3231]: I1108 00:25:35.774107 3231 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 8 00:25:35.774793 kubelet[3231]: I1108 00:25:35.774710 3231 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 8 00:25:35.774978 containerd[1730]: time="2025-11-08T00:25:35.774493744Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 8 00:25:36.927806 kubelet[3231]: I1108 00:25:36.927606 3231 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-2742f1d4ae" podStartSLOduration=7.927583481 podStartE2EDuration="7.927583481s" podCreationTimestamp="2025-11-08 00:25:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:25:32.211641068 +0000 UTC m=+1.796181168" watchObservedRunningTime="2025-11-08 00:25:36.927583481 +0000 UTC m=+6.512123581" Nov 8 00:25:36.946060 systemd[1]: Created slice kubepods-besteffort-pod7c50bde7_b281_4563_9361_600e10a2011b.slice - libcontainer container kubepods-besteffort-pod7c50bde7_b281_4563_9361_600e10a2011b.slice. 
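The "Updating Pod CIDR" entry above reflects spec.podCIDR being set on the Node object and handed to the container runtime over CRI. A sketch reading it back with client-go (node name taken from the log; the kubeconfig path is an assumption):

// Sketch: read back the node's spec.podCIDR that the kubelet just pushed
// to the runtime ("Updating Pod CIDR" above).
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ci-4081.3.6-n-2742f1d4ae", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("podCIDR:", node.Spec.PodCIDR) // expect 192.168.0.0/24 per the log
}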
Nov 8 00:25:37.028637 kubelet[3231]: I1108 00:25:37.028406 3231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7c50bde7-b281-4563-9361-600e10a2011b-xtables-lock\") pod \"kube-proxy-6pdq5\" (UID: \"7c50bde7-b281-4563-9361-600e10a2011b\") " pod="kube-system/kube-proxy-6pdq5" Nov 8 00:25:37.028637 kubelet[3231]: I1108 00:25:37.028454 3231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c50bde7-b281-4563-9361-600e10a2011b-lib-modules\") pod \"kube-proxy-6pdq5\" (UID: \"7c50bde7-b281-4563-9361-600e10a2011b\") " pod="kube-system/kube-proxy-6pdq5" Nov 8 00:25:37.028637 kubelet[3231]: I1108 00:25:37.028481 3231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7c50bde7-b281-4563-9361-600e10a2011b-kube-proxy\") pod \"kube-proxy-6pdq5\" (UID: \"7c50bde7-b281-4563-9361-600e10a2011b\") " pod="kube-system/kube-proxy-6pdq5" Nov 8 00:25:37.028637 kubelet[3231]: I1108 00:25:37.028506 3231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkpjb\" (UniqueName: \"kubernetes.io/projected/7c50bde7-b281-4563-9361-600e10a2011b-kube-api-access-vkpjb\") pod \"kube-proxy-6pdq5\" (UID: \"7c50bde7-b281-4563-9361-600e10a2011b\") " pod="kube-system/kube-proxy-6pdq5" Nov 8 00:25:37.044128 systemd[1]: Created slice kubepods-besteffort-pod7c0113b0_ed11_4956_86e2_26a93617c17f.slice - libcontainer container kubepods-besteffort-pod7c0113b0_ed11_4956_86e2_26a93617c17f.slice. Nov 8 00:25:37.130296 kubelet[3231]: I1108 00:25:37.128864 3231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7c0113b0-ed11-4956-86e2-26a93617c17f-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-5pmxb\" (UID: \"7c0113b0-ed11-4956-86e2-26a93617c17f\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-5pmxb" Nov 8 00:25:37.130296 kubelet[3231]: I1108 00:25:37.128926 3231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjt7r\" (UniqueName: \"kubernetes.io/projected/7c0113b0-ed11-4956-86e2-26a93617c17f-kube-api-access-mjt7r\") pod \"tigera-operator-65cdcdfd6d-5pmxb\" (UID: \"7c0113b0-ed11-4956-86e2-26a93617c17f\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-5pmxb" Nov 8 00:25:37.262774 containerd[1730]: time="2025-11-08T00:25:37.262642060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6pdq5,Uid:7c50bde7-b281-4563-9361-600e10a2011b,Namespace:kube-system,Attempt:0,}" Nov 8 00:25:37.301821 containerd[1730]: time="2025-11-08T00:25:37.301044527Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:25:37.301821 containerd[1730]: time="2025-11-08T00:25:37.301715635Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:25:37.301821 containerd[1730]: time="2025-11-08T00:25:37.301771536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:25:37.304839 containerd[1730]: time="2025-11-08T00:25:37.301879437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:25:37.327888 systemd[1]: Started cri-containerd-35342698bac2bd2c91a561b0b472c85ac22e2515185f61442788079ea1f8f437.scope - libcontainer container 35342698bac2bd2c91a561b0b472c85ac22e2515185f61442788079ea1f8f437. Nov 8 00:25:37.351066 containerd[1730]: time="2025-11-08T00:25:37.350945535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6pdq5,Uid:7c50bde7-b281-4563-9361-600e10a2011b,Namespace:kube-system,Attempt:0,} returns sandbox id \"35342698bac2bd2c91a561b0b472c85ac22e2515185f61442788079ea1f8f437\"" Nov 8 00:25:37.356727 containerd[1730]: time="2025-11-08T00:25:37.355953095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-5pmxb,Uid:7c0113b0-ed11-4956-86e2-26a93617c17f,Namespace:tigera-operator,Attempt:0,}" Nov 8 00:25:37.367111 containerd[1730]: time="2025-11-08T00:25:37.367075431Z" level=info msg="CreateContainer within sandbox \"35342698bac2bd2c91a561b0b472c85ac22e2515185f61442788079ea1f8f437\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 8 00:25:37.409145 containerd[1730]: time="2025-11-08T00:25:37.409025242Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:25:37.409400 containerd[1730]: time="2025-11-08T00:25:37.409136943Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:25:37.409400 containerd[1730]: time="2025-11-08T00:25:37.409171843Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:25:37.410382 containerd[1730]: time="2025-11-08T00:25:37.410204256Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:25:37.426219 containerd[1730]: time="2025-11-08T00:25:37.426173250Z" level=info msg="CreateContainer within sandbox \"35342698bac2bd2c91a561b0b472c85ac22e2515185f61442788079ea1f8f437\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"aed32a2f334f95f9bfe2683bdb1120bcb528b3db71e9344962bbc46df1aeab6a\"" Nov 8 00:25:37.428751 containerd[1730]: time="2025-11-08T00:25:37.427509667Z" level=info msg="StartContainer for \"aed32a2f334f95f9bfe2683bdb1120bcb528b3db71e9344962bbc46df1aeab6a\"" Nov 8 00:25:37.428086 systemd[1]: Started cri-containerd-1b88707d5cc8aec13f3d9b1732cd9625c6f35486f88a63da44a88cd7d17c30ed.scope - libcontainer container 1b88707d5cc8aec13f3d9b1732cd9625c6f35486f88a63da44a88cd7d17c30ed. Nov 8 00:25:37.466927 systemd[1]: Started cri-containerd-aed32a2f334f95f9bfe2683bdb1120bcb528b3db71e9344962bbc46df1aeab6a.scope - libcontainer container aed32a2f334f95f9bfe2683bdb1120bcb528b3db71e9344962bbc46df1aeab6a. 
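The `cri-containerd-<id>.scope` units systemd starts above correspond to containers in containerd's `k8s.io` namespace. A sketch listing them directly with the containerd Go client (run on the node against the default socket; containerd v1.7.x import paths assumed, matching the version in the log):

// Sketch: enumerate the CRI-managed containers behind the
// cri-containerd-*.scope units, straight from containerd.
package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	containers, err := client.Containers(ctx)
	if err != nil {
		panic(err)
	}
	for _, c := range containers {
		fmt.Println(c.ID()) // e.g. 35342698bac2... for the kube-proxy sandbox above
	}
}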
Nov 8 00:25:37.486791 containerd[1730]: time="2025-11-08T00:25:37.486707787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-5pmxb,Uid:7c0113b0-ed11-4956-86e2-26a93617c17f,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1b88707d5cc8aec13f3d9b1732cd9625c6f35486f88a63da44a88cd7d17c30ed\"" Nov 8 00:25:37.491409 containerd[1730]: time="2025-11-08T00:25:37.491275943Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 8 00:25:37.512351 containerd[1730]: time="2025-11-08T00:25:37.511854693Z" level=info msg="StartContainer for \"aed32a2f334f95f9bfe2683bdb1120bcb528b3db71e9344962bbc46df1aeab6a\" returns successfully" Nov 8 00:25:39.147352 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3564402328.mount: Deactivated successfully. Nov 8 00:25:39.933954 containerd[1730]: time="2025-11-08T00:25:39.933903178Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:25:39.936361 containerd[1730]: time="2025-11-08T00:25:39.936186205Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 8 00:25:39.940158 containerd[1730]: time="2025-11-08T00:25:39.938861438Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:25:39.944230 containerd[1730]: time="2025-11-08T00:25:39.943017389Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:25:39.944230 containerd[1730]: time="2025-11-08T00:25:39.944045201Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.452714458s" Nov 8 00:25:39.944230 containerd[1730]: time="2025-11-08T00:25:39.944076401Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 8 00:25:39.951596 containerd[1730]: time="2025-11-08T00:25:39.951567593Z" level=info msg="CreateContainer within sandbox \"1b88707d5cc8aec13f3d9b1732cd9625c6f35486f88a63da44a88cd7d17c30ed\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 8 00:25:39.982103 containerd[1730]: time="2025-11-08T00:25:39.982048964Z" level=info msg="CreateContainer within sandbox \"1b88707d5cc8aec13f3d9b1732cd9625c6f35486f88a63da44a88cd7d17c30ed\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9e90e8988175bf2d586390308a9d88ba130896cd3931dd1febbb0cb3c0dcf951\"" Nov 8 00:25:39.982797 containerd[1730]: time="2025-11-08T00:25:39.982749572Z" level=info msg="StartContainer for \"9e90e8988175bf2d586390308a9d88ba130896cd3931dd1febbb0cb3c0dcf951\"" Nov 8 00:25:40.013898 systemd[1]: Started cri-containerd-9e90e8988175bf2d586390308a9d88ba130896cd3931dd1febbb0cb3c0dcf951.scope - libcontainer container 9e90e8988175bf2d586390308a9d88ba130896cd3931dd1febbb0cb3c0dcf951. 
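The PullImage step logged above can be replayed with the same client; `WithPullUnpack` also unpacks the layers into the snapshotter, which is roughly what the CRI pull path does (image ref taken from the log):

// Sketch: pull the tigera-operator image the way the log's PullImage step
// does, using the containerd Go client on the node.
package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	img, err := client.Pull(ctx, "quay.io/tigera/operator:v1.38.7", containerd.WithPullUnpack)
	if err != nil {
		panic(err)
	}
	size, _ := img.Size(ctx)
	fmt.Printf("pulled %s (%d bytes)\n", img.Name(), size)
}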
Nov 8 00:25:40.039614 containerd[1730]: time="2025-11-08T00:25:40.039568964Z" level=info msg="StartContainer for \"9e90e8988175bf2d586390308a9d88ba130896cd3931dd1febbb0cb3c0dcf951\" returns successfully" Nov 8 00:25:40.183778 kubelet[3231]: I1108 00:25:40.183691 3231 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6pdq5" podStartSLOduration=4.183671818 podStartE2EDuration="4.183671818s" podCreationTimestamp="2025-11-08 00:25:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:25:38.180178329 +0000 UTC m=+7.764718529" watchObservedRunningTime="2025-11-08 00:25:40.183671818 +0000 UTC m=+9.768211918" Nov 8 00:25:41.043445 kubelet[3231]: I1108 00:25:41.043333 3231 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-5pmxb" podStartSLOduration=1.587570789 podStartE2EDuration="4.043315683s" podCreationTimestamp="2025-11-08 00:25:37 +0000 UTC" firstStartedPulling="2025-11-08 00:25:37.489453621 +0000 UTC m=+7.073993721" lastFinishedPulling="2025-11-08 00:25:39.945198515 +0000 UTC m=+9.529738615" observedRunningTime="2025-11-08 00:25:40.184564129 +0000 UTC m=+9.769104329" watchObservedRunningTime="2025-11-08 00:25:41.043315683 +0000 UTC m=+10.627855783" Nov 8 00:25:46.379897 sudo[2346]: pam_unix(sudo:session): session closed for user root Nov 8 00:25:46.490530 sshd[2343]: pam_unix(sshd:session): session closed for user core Nov 8 00:25:46.494933 systemd[1]: sshd@6-10.200.8.42:22-10.200.16.10:43944.service: Deactivated successfully. Nov 8 00:25:46.499469 systemd[1]: session-9.scope: Deactivated successfully. Nov 8 00:25:46.499956 systemd[1]: session-9.scope: Consumed 5.238s CPU time, 158.3M memory peak, 0B memory swap peak. Nov 8 00:25:46.501701 systemd-logind[1697]: Session 9 logged out. Waiting for processes to exit. Nov 8 00:25:46.503377 systemd-logind[1697]: Removed session 9. Nov 8 00:25:52.014911 systemd[1]: Created slice kubepods-besteffort-pod0c518c02_1a19_4a86_b362_f784416f07e1.slice - libcontainer container kubepods-besteffort-pod0c518c02_1a19_4a86_b362_f784416f07e1.slice. 
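The pod_startup_latency_tracker entries above report two numbers: podStartE2EDuration (observedRunningTime minus podCreationTimestamp) and podStartSLOduration, which additionally excludes the image-pull window. The tigera-operator figures check out exactly from the logged timestamps:

// Worked check of the tigera-operator-65cdcdfd6d-5pmxb tracker entry,
// using the timestamps exactly as logged.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05 -0700 MST" // Parse accepts optional fractional seconds
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-11-08 00:25:37 +0000 UTC")
	firstPull := parse("2025-11-08 00:25:37.489453621 +0000 UTC")
	lastPull := parse("2025-11-08 00:25:39.945198515 +0000 UTC")
	running := parse("2025-11-08 00:25:41.043315683 +0000 UTC")

	e2e := running.Sub(created)          // podStartE2EDuration, as logged
	slo := e2e - lastPull.Sub(firstPull) // subtract the image-pull window
	fmt.Println(e2e, slo)                // 4.043315683s 1.587570789s
}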
Nov 8 00:25:52.034752 kubelet[3231]: I1108 00:25:52.034215 3231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzxtv\" (UniqueName: \"kubernetes.io/projected/0c518c02-1a19-4a86-b362-f784416f07e1-kube-api-access-pzxtv\") pod \"calico-typha-6947dd6656-hnvfk\" (UID: \"0c518c02-1a19-4a86-b362-f784416f07e1\") " pod="calico-system/calico-typha-6947dd6656-hnvfk" Nov 8 00:25:52.034752 kubelet[3231]: I1108 00:25:52.034267 3231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c518c02-1a19-4a86-b362-f784416f07e1-tigera-ca-bundle\") pod \"calico-typha-6947dd6656-hnvfk\" (UID: \"0c518c02-1a19-4a86-b362-f784416f07e1\") " pod="calico-system/calico-typha-6947dd6656-hnvfk" Nov 8 00:25:52.038159 kubelet[3231]: I1108 00:25:52.034290 3231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/0c518c02-1a19-4a86-b362-f784416f07e1-typha-certs\") pod \"calico-typha-6947dd6656-hnvfk\" (UID: \"0c518c02-1a19-4a86-b362-f784416f07e1\") " pod="calico-system/calico-typha-6947dd6656-hnvfk" Nov 8 00:25:52.195947 systemd[1]: Created slice kubepods-besteffort-pod72f30cbf_5144_47e2_8f1c_a93925fcb5ef.slice - libcontainer container kubepods-besteffort-pod72f30cbf_5144_47e2_8f1c_a93925fcb5ef.slice. Nov 8 00:25:52.237538 kubelet[3231]: I1108 00:25:52.237484 3231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/72f30cbf-5144-47e2-8f1c-a93925fcb5ef-var-run-calico\") pod \"calico-node-jz75c\" (UID: \"72f30cbf-5144-47e2-8f1c-a93925fcb5ef\") " pod="calico-system/calico-node-jz75c" Nov 8 00:25:52.237538 kubelet[3231]: I1108 00:25:52.237531 3231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/72f30cbf-5144-47e2-8f1c-a93925fcb5ef-var-lib-calico\") pod \"calico-node-jz75c\" (UID: \"72f30cbf-5144-47e2-8f1c-a93925fcb5ef\") " pod="calico-system/calico-node-jz75c" Nov 8 00:25:52.237779 kubelet[3231]: I1108 00:25:52.237552 3231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/72f30cbf-5144-47e2-8f1c-a93925fcb5ef-node-certs\") pod \"calico-node-jz75c\" (UID: \"72f30cbf-5144-47e2-8f1c-a93925fcb5ef\") " pod="calico-system/calico-node-jz75c" Nov 8 00:25:52.237779 kubelet[3231]: I1108 00:25:52.237570 3231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/72f30cbf-5144-47e2-8f1c-a93925fcb5ef-policysync\") pod \"calico-node-jz75c\" (UID: \"72f30cbf-5144-47e2-8f1c-a93925fcb5ef\") " pod="calico-system/calico-node-jz75c" Nov 8 00:25:52.237779 kubelet[3231]: I1108 00:25:52.237589 3231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/72f30cbf-5144-47e2-8f1c-a93925fcb5ef-cni-bin-dir\") pod \"calico-node-jz75c\" (UID: \"72f30cbf-5144-47e2-8f1c-a93925fcb5ef\") " pod="calico-system/calico-node-jz75c" Nov 8 00:25:52.237779 kubelet[3231]: I1108 00:25:52.237607 3231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: 
\"kubernetes.io/host-path/72f30cbf-5144-47e2-8f1c-a93925fcb5ef-cni-net-dir\") pod \"calico-node-jz75c\" (UID: \"72f30cbf-5144-47e2-8f1c-a93925fcb5ef\") " pod="calico-system/calico-node-jz75c" Nov 8 00:25:52.237779 kubelet[3231]: I1108 00:25:52.237643 3231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/72f30cbf-5144-47e2-8f1c-a93925fcb5ef-lib-modules\") pod \"calico-node-jz75c\" (UID: \"72f30cbf-5144-47e2-8f1c-a93925fcb5ef\") " pod="calico-system/calico-node-jz75c" Nov 8 00:25:52.237987 kubelet[3231]: I1108 00:25:52.237664 3231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/72f30cbf-5144-47e2-8f1c-a93925fcb5ef-cni-log-dir\") pod \"calico-node-jz75c\" (UID: \"72f30cbf-5144-47e2-8f1c-a93925fcb5ef\") " pod="calico-system/calico-node-jz75c" Nov 8 00:25:52.237987 kubelet[3231]: I1108 00:25:52.237685 3231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/72f30cbf-5144-47e2-8f1c-a93925fcb5ef-tigera-ca-bundle\") pod \"calico-node-jz75c\" (UID: \"72f30cbf-5144-47e2-8f1c-a93925fcb5ef\") " pod="calico-system/calico-node-jz75c" Nov 8 00:25:52.237987 kubelet[3231]: I1108 00:25:52.237705 3231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/72f30cbf-5144-47e2-8f1c-a93925fcb5ef-flexvol-driver-host\") pod \"calico-node-jz75c\" (UID: \"72f30cbf-5144-47e2-8f1c-a93925fcb5ef\") " pod="calico-system/calico-node-jz75c" Nov 8 00:25:52.237987 kubelet[3231]: I1108 00:25:52.237753 3231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/72f30cbf-5144-47e2-8f1c-a93925fcb5ef-xtables-lock\") pod \"calico-node-jz75c\" (UID: \"72f30cbf-5144-47e2-8f1c-a93925fcb5ef\") " pod="calico-system/calico-node-jz75c" Nov 8 00:25:52.237987 kubelet[3231]: I1108 00:25:52.237774 3231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47vnp\" (UniqueName: \"kubernetes.io/projected/72f30cbf-5144-47e2-8f1c-a93925fcb5ef-kube-api-access-47vnp\") pod \"calico-node-jz75c\" (UID: \"72f30cbf-5144-47e2-8f1c-a93925fcb5ef\") " pod="calico-system/calico-node-jz75c" Nov 8 00:25:52.344797 containerd[1730]: time="2025-11-08T00:25:52.343880231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6947dd6656-hnvfk,Uid:0c518c02-1a19-4a86-b362-f784416f07e1,Namespace:calico-system,Attempt:0,}" Nov 8 00:25:52.347301 kubelet[3231]: E1108 00:25:52.346972 3231 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:52.347301 kubelet[3231]: W1108 00:25:52.346991 3231 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:52.347301 kubelet[3231]: E1108 00:25:52.347016 3231 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
[the same three-entry FlexVolume probe failure (driver-call.go:262, driver-call.go:149, plugins.go:697 for nodeagent~uds init) recurs at 00:25:52.347, 00:25:52.355, and 00:25:52.361; duplicates elided]
Nov 8 00:25:52.402193 kubelet[3231]: E1108 00:25:52.401786 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kbfws" podUID="77eae253-1bce-4de5-8e9b-23a9c58b4ee0"
Nov 8 00:25:52.412891 containerd[1730]: time="2025-11-08T00:25:52.412149940Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 8 00:25:52.412891 containerd[1730]: time="2025-11-08T00:25:52.412239242Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 8 00:25:52.412891 containerd[1730]: time="2025-11-08T00:25:52.412332743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:25:52.412891 containerd[1730]: time="2025-11-08T00:25:52.412457945Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
[the FlexVolume probe failure repeats continuously from 00:25:52.432 through 00:25:52.443; duplicates elided]
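The elided triplet is the kubelet's FlexVolume prober at work: on each probe pass it executes every driver found under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ with the single argument init and unmarshals the driver's stdout as JSON. Here the nodeagent~uds directory exists but its uds executable does not, so stdout is empty and the unmarshal fails with "unexpected end of JSON input". A minimal sketch of the init reply a conforming driver would print, per the FlexVolume calling convention (illustrative stub only, not Calico's actual uds binary):

    // flexvol-stub.go: hypothetical stand-in for a FlexVolume driver binary.
    // Installed as .../volume/exec/nodeagent~uds/uds, it would satisfy the
    // "init" probe seen failing in the log above.
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // driverStatus mirrors the JSON shape the kubelet expects back from a
    // FlexVolume driver call.
    type driverStatus struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    // reply prints the status JSON and exits; an empty stdout here is
    // exactly what produces "unexpected end of JSON input" at
    // driver-call.go:262 in the log.
    func reply(s driverStatus, code int) {
        b, _ := json.Marshal(s)
        fmt.Println(string(b))
        os.Exit(code)
    }

    func main() {
        if len(os.Args) > 1 && os.Args[1] == "init" {
            reply(driverStatus{
                Status:       "Success",
                Capabilities: map[string]bool{"attach": false},
            }, 0)
        }
        // All other FlexVolume calls are unsupported by this stub.
        reply(driverStatus{Status: "Not supported"}, 1)
    }

Dropping such a binary into the nodeagent~uds directory, or removing the empty directory, would quiet the probe loop; the log itself does not show which of the two eventually happens here.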
Nov 8 00:25:52.445162 kubelet[3231]: I1108 00:25:52.444390 3231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/77eae253-1bce-4de5-8e9b-23a9c58b4ee0-varrun\") pod \"csi-node-driver-kbfws\" (UID: \"77eae253-1bce-4de5-8e9b-23a9c58b4ee0\") " pod="calico-system/csi-node-driver-kbfws"
Nov 8 00:25:52.445162 kubelet[3231]: I1108 00:25:52.445138 3231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/77eae253-1bce-4de5-8e9b-23a9c58b4ee0-kubelet-dir\") pod \"csi-node-driver-kbfws\" (UID: \"77eae253-1bce-4de5-8e9b-23a9c58b4ee0\") " pod="calico-system/csi-node-driver-kbfws"
Nov 8 00:25:52.449645 kubelet[3231]: I1108 00:25:52.449614 3231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/77eae253-1bce-4de5-8e9b-23a9c58b4ee0-socket-dir\") pod \"csi-node-driver-kbfws\" (UID: \"77eae253-1bce-4de5-8e9b-23a9c58b4ee0\") " pod="calico-system/csi-node-driver-kbfws"
Nov 8 00:25:52.453001 kubelet[3231]: I1108 00:25:52.452916 3231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5kkh\" (UniqueName: \"kubernetes.io/projected/77eae253-1bce-4de5-8e9b-23a9c58b4ee0-kube-api-access-n5kkh\") pod \"csi-node-driver-kbfws\" (UID: \"77eae253-1bce-4de5-8e9b-23a9c58b4ee0\") " pod="calico-system/csi-node-driver-kbfws"
Nov 8 00:25:52.456632 kubelet[3231]: I1108 00:25:52.456607 3231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/77eae253-1bce-4de5-8e9b-23a9c58b4ee0-registration-dir\") pod \"csi-node-driver-kbfws\" (UID: \"77eae253-1bce-4de5-8e9b-23a9c58b4ee0\") " pod="calico-system/csi-node-driver-kbfws"
[FlexVolume probe failures interleaved with these mounts, 00:25:52.443 through 00:25:52.458, elided]
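Each reconciler entry above keys the volume by a UniqueName of the form <plugin>/<podUID>-<volumeName>, which is how the kubelet's volume manager tracks attached volumes. A tiny sketch reproducing the format visible in the log (uniqueVolumeName is our own illustrative helper, not a kubelet export):

    // uniquename.go: reconstructs the UniqueName format visible in the
    // reconciler_common.go entries above.
    package main

    import "fmt"

    // uniqueVolumeName joins plugin, pod UID, and volume name the way the
    // log renders them: <plugin>/<podUID>-<volumeName>.
    func uniqueVolumeName(plugin, podUID, volume string) string {
        return fmt.Sprintf("%s/%s-%s", plugin, podUID, volume)
    }

    func main() {
        // Matches: kubernetes.io/host-path/77eae253-...-varrun
        fmt.Println(uniqueVolumeName(
            "kubernetes.io/host-path",
            "77eae253-1bce-4de5-8e9b-23a9c58b4ee0",
            "varrun",
        ))
    }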
Nov 8 00:25:52.460711 systemd[1]: Started cri-containerd-054137d54d9b9c77d961314d0eda402844eaf22ed7c0ad33ec9edb378d9fa3fc.scope - libcontainer container 054137d54d9b9c77d961314d0eda402844eaf22ed7c0ad33ec9edb378d9fa3fc.
Nov 8 00:25:52.505181 containerd[1730]: time="2025-11-08T00:25:52.505128115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jz75c,Uid:72f30cbf-5144-47e2-8f1c-a93925fcb5ef,Namespace:calico-system,Attempt:0,}"
Nov 8 00:25:52.553702 containerd[1730]: time="2025-11-08T00:25:52.553538631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6947dd6656-hnvfk,Uid:0c518c02-1a19-4a86-b362-f784416f07e1,Namespace:calico-system,Attempt:0,} returns sandbox id \"054137d54d9b9c77d961314d0eda402844eaf22ed7c0ad33ec9edb378d9fa3fc\""
Nov 8 00:25:52.557405 containerd[1730]: time="2025-11-08T00:25:52.557220886Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
[the FlexVolume probe failure repeats continuously from 00:25:52.558 through 00:25:52.581; duplicates elided]
Nov 8 00:25:52.586887 containerd[1730]: time="2025-11-08T00:25:52.586481419Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 8 00:25:52.586887 containerd[1730]: time="2025-11-08T00:25:52.586542419Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 8 00:25:52.586887 containerd[1730]: time="2025-11-08T00:25:52.586558320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 8 00:25:52.586887 containerd[1730]: time="2025-11-08T00:25:52.586641121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
[one further FlexVolume probe failure at 00:25:52.622 elided]
Nov 8 00:25:52.626372 systemd[1]: Started cri-containerd-8313fe7e4f05a7ac191a0ebc12ef0de4e74a2480a2be3ed47e58d372533415db.scope - libcontainer container 8313fe7e4f05a7ac191a0ebc12ef0de4e74a2480a2be3ed47e58d372533415db.
Nov 8 00:25:52.710842 containerd[1730]: time="2025-11-08T00:25:52.710508653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jz75c,Uid:72f30cbf-5144-47e2-8f1c-a93925fcb5ef,Namespace:calico-system,Attempt:0,} returns sandbox id \"8313fe7e4f05a7ac191a0ebc12ef0de4e74a2480a2be3ed47e58d372533415db\""
Nov 8 00:25:53.928886 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3185935353.mount: Deactivated successfully.
Nov 8 00:25:54.128747 kubelet[3231]: E1108 00:25:54.128687 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kbfws" podUID="77eae253-1bce-4de5-8e9b-23a9c58b4ee0"
Nov 8 00:25:55.338236 containerd[1730]: time="2025-11-08T00:25:55.338178515Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:25:55.340790 containerd[1730]: time="2025-11-08T00:25:55.340620451Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Nov 8 00:25:55.344475 containerd[1730]: time="2025-11-08T00:25:55.343326691Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:25:55.347827 containerd[1730]: time="2025-11-08T00:25:55.347790557Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:25:55.348505 containerd[1730]: time="2025-11-08T00:25:55.348472567Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.791206281s"
Nov 8 00:25:55.348632 containerd[1730]: time="2025-11-08T00:25:55.348611469Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Nov 8 00:25:55.349586 containerd[1730]: time="2025-11-08T00:25:55.349555883Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Nov 8 00:25:55.377404 containerd[1730]: time="2025-11-08T00:25:55.377366894Z" level=info msg="CreateContainer within sandbox \"054137d54d9b9c77d961314d0eda402844eaf22ed7c0ad33ec9edb378d9fa3fc\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Nov 8 00:25:55.402421 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount964771531.mount: Deactivated successfully.
Nov 8 00:25:55.412504 containerd[1730]: time="2025-11-08T00:25:55.412461913Z" level=info msg="CreateContainer within sandbox \"054137d54d9b9c77d961314d0eda402844eaf22ed7c0ad33ec9edb378d9fa3fc\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c04e2efcffd84342a8104b45cd0bab5a644f96c9eb193ba788ef71ed42f84dcc\""
Nov 8 00:25:55.414299 containerd[1730]: time="2025-11-08T00:25:55.413088323Z" level=info msg="StartContainer for \"c04e2efcffd84342a8104b45cd0bab5a644f96c9eb193ba788ef71ed42f84dcc\""
Nov 8 00:25:55.445949 systemd[1]: Started cri-containerd-c04e2efcffd84342a8104b45cd0bab5a644f96c9eb193ba788ef71ed42f84dcc.scope - libcontainer container c04e2efcffd84342a8104b45cd0bab5a644f96c9eb193ba788ef71ed42f84dcc.
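The typha pull above is driven by containerd's CRI plugin and completes in about 2.79 s. The same fetch can be reproduced against the containerd socket with the official Go client; a hedged sketch, assuming containerd's default socket path and the k8s.io namespace the CRI plugin uses (neither is stated in this log):

    // pull.go: sketch that re-pulls the typha image through the containerd
    // Go client (github.com/containerd/containerd); socket path and
    // namespace below are conventional defaults, not read from this log.
    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // The CRI plugin keeps its images in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // Pull and unpack, mirroring what the PullImage entry records.
        img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/typha:v3.30.4",
            containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("pulled", img.Name(), "digest", img.Target().Digest)
    }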
Nov 8 00:25:55.495812 containerd[1730]: time="2025-11-08T00:25:55.495758245Z" level=info msg="StartContainer for \"c04e2efcffd84342a8104b45cd0bab5a644f96c9eb193ba788ef71ed42f84dcc\" returns successfully"
Nov 8 00:25:56.128912 kubelet[3231]: E1108 00:25:56.128656 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kbfws" podUID="77eae253-1bce-4de5-8e9b-23a9c58b4ee0"
[the FlexVolume nodeagent~uds probe failure resumes on the next probe pass and repeats continuously from 00:25:56.267 through 00:25:56.298; duplicates elided]
Error: unexpected end of JSON input" Nov 8 00:25:56.298601 kubelet[3231]: E1108 00:25:56.298582 3231 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:56.298601 kubelet[3231]: W1108 00:25:56.298596 3231 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:56.298796 kubelet[3231]: E1108 00:25:56.298610 3231 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:25:56.298907 kubelet[3231]: E1108 00:25:56.298886 3231 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:56.298907 kubelet[3231]: W1108 00:25:56.298902 3231 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:56.298997 kubelet[3231]: E1108 00:25:56.298916 3231 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:25:56.299187 kubelet[3231]: E1108 00:25:56.299170 3231 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:56.299187 kubelet[3231]: W1108 00:25:56.299184 3231 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:56.299299 kubelet[3231]: E1108 00:25:56.299198 3231 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:25:56.299800 kubelet[3231]: E1108 00:25:56.299782 3231 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:56.300113 kubelet[3231]: W1108 00:25:56.299855 3231 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:56.300113 kubelet[3231]: E1108 00:25:56.299877 3231 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:25:56.300238 kubelet[3231]: E1108 00:25:56.300132 3231 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:56.300238 kubelet[3231]: W1108 00:25:56.300144 3231 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:56.300238 kubelet[3231]: E1108 00:25:56.300157 3231 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:25:56.300423 kubelet[3231]: E1108 00:25:56.300372 3231 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:56.300423 kubelet[3231]: W1108 00:25:56.300382 3231 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:56.300423 kubelet[3231]: E1108 00:25:56.300396 3231 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:25:56.300623 kubelet[3231]: E1108 00:25:56.300603 3231 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:56.300623 kubelet[3231]: W1108 00:25:56.300617 3231 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:56.300757 kubelet[3231]: E1108 00:25:56.300629 3231 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:25:56.300900 kubelet[3231]: E1108 00:25:56.300885 3231 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:56.300900 kubelet[3231]: W1108 00:25:56.300898 3231 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:56.300994 kubelet[3231]: E1108 00:25:56.300912 3231 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 8 00:25:56.301248 kubelet[3231]: E1108 00:25:56.301231 3231 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 8 00:25:56.301248 kubelet[3231]: W1108 00:25:56.301244 3231 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 8 00:25:56.301334 kubelet[3231]: E1108 00:25:56.301257 3231 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 8 00:25:56.702148 containerd[1730]: time="2025-11-08T00:25:56.702094887Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:25:56.706226 containerd[1730]: time="2025-11-08T00:25:56.706057845Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 8 00:25:56.711864 containerd[1730]: time="2025-11-08T00:25:56.711800830Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:25:56.717427 containerd[1730]: time="2025-11-08T00:25:56.717371812Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:25:56.718462 containerd[1730]: time="2025-11-08T00:25:56.717984822Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.368363437s" Nov 8 00:25:56.718462 containerd[1730]: time="2025-11-08T00:25:56.718025622Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 8 00:25:56.727804 containerd[1730]: time="2025-11-08T00:25:56.727773366Z" level=info msg="CreateContainer within sandbox \"8313fe7e4f05a7ac191a0ebc12ef0de4e74a2480a2be3ed47e58d372533415db\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 8 00:25:56.777776 containerd[1730]: time="2025-11-08T00:25:56.777706505Z" level=info msg="CreateContainer within sandbox \"8313fe7e4f05a7ac191a0ebc12ef0de4e74a2480a2be3ed47e58d372533415db\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f253eb8a3e178057463e7d324cc9becc5b456f84164a645c08a597bcd8f0b430\"" Nov 8 00:25:56.778651 containerd[1730]: time="2025-11-08T00:25:56.778609518Z" level=info msg="StartContainer for \"f253eb8a3e178057463e7d324cc9becc5b456f84164a645c08a597bcd8f0b430\"" Nov 8 00:25:56.816863 systemd[1]: Started cri-containerd-f253eb8a3e178057463e7d324cc9becc5b456f84164a645c08a597bcd8f0b430.scope - libcontainer container f253eb8a3e178057463e7d324cc9becc5b456f84164a645c08a597bcd8f0b430. Nov 8 00:25:56.851134 containerd[1730]: time="2025-11-08T00:25:56.851089190Z" level=info msg="StartContainer for \"f253eb8a3e178057463e7d324cc9becc5b456f84164a645c08a597bcd8f0b430\" returns successfully" Nov 8 00:25:56.859973 systemd[1]: cri-containerd-f253eb8a3e178057463e7d324cc9becc5b456f84164a645c08a597bcd8f0b430.scope: Deactivated successfully. 
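The repeated kubelet errors above are the FlexVolume dynamic-probe loop: when the plugin directory changes, kubelet execs the driver binary at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument init and parses its stdout as JSON. The binary is not installed yet, so the exec fails ("executable file not found in $PATH"), stdout is empty, and the decode reports "unexpected end of JSON input". The flexvol-driver container created above from ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4 is what installs that binary, judging by its name and the plugin directory it targets, and no further probe failures appear in this log once it has run. Below is a minimal sketch of the call contract the probe expects, per the FlexVolume convention of a JSON status object on stdout; it is illustrative only, not the actual uds driver:

    package main

    // Illustrative FlexVolume driver skeleton. kubelet invokes the binary as
    // "<driver> init" and expects a JSON status object on stdout; printing
    // nothing at all is exactly what produces kubelet's "unexpected end of
    // JSON input" errors above.

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    type driverStatus struct {
    	Status       string          `json:"status"`
    	Message      string          `json:"message,omitempty"`
    	Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func reply(s driverStatus, code int) {
    	out, _ := json.Marshal(s)
    	fmt.Println(string(out))
    	os.Exit(code)
    }

    func main() {
    	if len(os.Args) > 1 && os.Args[1] == "init" {
    		// "attach": false tells kubelet this driver has no attach/detach phase.
    		reply(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}}, 0)
    	}
    	reply(driverStatus{Status: "Not supported"}, 1)
    }

The "Not supported" fallback matters: kubelet treats an empty or non-JSON reply as a broken driver, so even unimplemented calls must still print a status object.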
Nov 8 00:25:57.215198 kubelet[3231]: I1108 00:25:57.214785 3231 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 8 00:25:57.233200 kubelet[3231]: I1108 00:25:57.231758 3231 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6947dd6656-hnvfk" podStartSLOduration=3.438728213 podStartE2EDuration="6.23173672s" podCreationTimestamp="2025-11-08 00:25:51 +0000 UTC" firstStartedPulling="2025-11-08 00:25:52.556371373 +0000 UTC m=+22.140911573" lastFinishedPulling="2025-11-08 00:25:55.34937998 +0000 UTC m=+24.933920080" observedRunningTime="2025-11-08 00:25:56.225255334 +0000 UTC m=+25.809795434" watchObservedRunningTime="2025-11-08 00:25:57.23173672 +0000 UTC m=+26.816276920" Nov 8 00:25:57.356423 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f253eb8a3e178057463e7d324cc9becc5b456f84164a645c08a597bcd8f0b430-rootfs.mount: Deactivated successfully. Nov 8 00:25:58.129531 kubelet[3231]: E1108 00:25:58.129355 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kbfws" podUID="77eae253-1bce-4de5-8e9b-23a9c58b4ee0" Nov 8 00:25:58.363315 containerd[1730]: time="2025-11-08T00:25:58.363245054Z" level=info msg="shim disconnected" id=f253eb8a3e178057463e7d324cc9becc5b456f84164a645c08a597bcd8f0b430 namespace=k8s.io Nov 8 00:25:58.363315 containerd[1730]: time="2025-11-08T00:25:58.363306055Z" level=warning msg="cleaning up after shim disconnected" id=f253eb8a3e178057463e7d324cc9becc5b456f84164a645c08a597bcd8f0b430 namespace=k8s.io Nov 8 00:25:58.363315 containerd[1730]: time="2025-11-08T00:25:58.363317555Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:25:59.222366 containerd[1730]: time="2025-11-08T00:25:59.222294555Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 8 00:26:00.129235 kubelet[3231]: E1108 00:26:00.129181 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kbfws" podUID="77eae253-1bce-4de5-8e9b-23a9c58b4ee0" Nov 8 00:26:02.128399 kubelet[3231]: E1108 00:26:02.128337 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kbfws" podUID="77eae253-1bce-4de5-8e9b-23a9c58b4ee0" Nov 8 00:26:03.469958 containerd[1730]: time="2025-11-08T00:26:03.469902055Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:03.472839 containerd[1730]: time="2025-11-08T00:26:03.472672897Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 8 00:26:03.475827 containerd[1730]: time="2025-11-08T00:26:03.475591841Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:03.479679 containerd[1730]: time="2025-11-08T00:26:03.479643603Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:03.480399 containerd[1730]: time="2025-11-08T00:26:03.480364314Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.258016657s" Nov 8 00:26:03.480477 containerd[1730]: time="2025-11-08T00:26:03.480406714Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 8 00:26:03.488087 containerd[1730]: time="2025-11-08T00:26:03.488044230Z" level=info msg="CreateContainer within sandbox \"8313fe7e4f05a7ac191a0ebc12ef0de4e74a2480a2be3ed47e58d372533415db\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 8 00:26:03.528088 containerd[1730]: time="2025-11-08T00:26:03.528045434Z" level=info msg="CreateContainer within sandbox \"8313fe7e4f05a7ac191a0ebc12ef0de4e74a2480a2be3ed47e58d372533415db\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c76ea60ccf9a64108da54924844534e8170f8fe0c8e5bc46b32cde75e5a941cb\"" Nov 8 00:26:03.528588 containerd[1730]: time="2025-11-08T00:26:03.528486141Z" level=info msg="StartContainer for \"c76ea60ccf9a64108da54924844534e8170f8fe0c8e5bc46b32cde75e5a941cb\"" Nov 8 00:26:03.564880 systemd[1]: Started cri-containerd-c76ea60ccf9a64108da54924844534e8170f8fe0c8e5bc46b32cde75e5a941cb.scope - libcontainer container c76ea60ccf9a64108da54924844534e8170f8fe0c8e5bc46b32cde75e5a941cb. Nov 8 00:26:03.598383 containerd[1730]: time="2025-11-08T00:26:03.597289980Z" level=info msg="StartContainer for \"c76ea60ccf9a64108da54924844534e8170f8fe0c8e5bc46b32cde75e5a941cb\" returns successfully" Nov 8 00:26:04.129197 kubelet[3231]: E1108 00:26:04.129123 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kbfws" podUID="77eae253-1bce-4de5-8e9b-23a9c58b4ee0" Nov 8 00:26:05.183256 containerd[1730]: time="2025-11-08T00:26:05.183166543Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: failed to load CNI config list file /etc/cni/net.d/10-calico.conflist: error parsing configuration list: unexpected end of JSON input: invalid cni config: failed to load cni config" Nov 8 00:26:05.185868 systemd[1]: cri-containerd-c76ea60ccf9a64108da54924844534e8170f8fe0c8e5bc46b32cde75e5a941cb.scope: Deactivated successfully. Nov 8 00:26:05.209450 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c76ea60ccf9a64108da54924844534e8170f8fe0c8e5bc46b32cde75e5a941cb-rootfs.mount: Deactivated successfully. Nov 8 00:26:05.276492 kubelet[3231]: I1108 00:26:05.276455 3231 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Nov 8 00:26:06.418228 systemd[1]: Created slice kubepods-besteffort-pod77eae253_1bce_4de5_8e9b_23a9c58b4ee0.slice - libcontainer container kubepods-besteffort-pod77eae253_1bce_4de5_8e9b_23a9c58b4ee0.slice. 
Nov 8 00:26:06.423878 containerd[1730]: time="2025-11-08T00:26:06.422924076Z" level=info msg="shim disconnected" id=c76ea60ccf9a64108da54924844534e8170f8fe0c8e5bc46b32cde75e5a941cb namespace=k8s.io Nov 8 00:26:06.424269 containerd[1730]: time="2025-11-08T00:26:06.423879290Z" level=warning msg="cleaning up after shim disconnected" id=c76ea60ccf9a64108da54924844534e8170f8fe0c8e5bc46b32cde75e5a941cb namespace=k8s.io Nov 8 00:26:06.424269 containerd[1730]: time="2025-11-08T00:26:06.423898391Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:26:06.433310 systemd[1]: Created slice kubepods-burstable-poddd3dad8e_8761_486d_82e7_516eeba2f8a7.slice - libcontainer container kubepods-burstable-poddd3dad8e_8761_486d_82e7_516eeba2f8a7.slice. Nov 8 00:26:06.444293 containerd[1730]: time="2025-11-08T00:26:06.442598473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kbfws,Uid:77eae253-1bce-4de5-8e9b-23a9c58b4ee0,Namespace:calico-system,Attempt:0,}" Nov 8 00:26:06.447794 systemd[1]: Created slice kubepods-besteffort-pod0ce899e0_d12a_4abb_b40d_26c4cc149868.slice - libcontainer container kubepods-besteffort-pod0ce899e0_d12a_4abb_b40d_26c4cc149868.slice. Nov 8 00:26:06.469447 systemd[1]: Created slice kubepods-besteffort-pod928ddcfe_b055_4feb_bfb6_23dedc6fa744.slice - libcontainer container kubepods-besteffort-pod928ddcfe_b055_4feb_bfb6_23dedc6fa744.slice. Nov 8 00:26:06.473607 kubelet[3231]: I1108 00:26:06.473574 3231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc5px\" (UniqueName: \"kubernetes.io/projected/0ce899e0-d12a-4abb-b40d-26c4cc149868-kube-api-access-qc5px\") pod \"calico-apiserver-74448999c6-6grzk\" (UID: \"0ce899e0-d12a-4abb-b40d-26c4cc149868\") " pod="calico-apiserver/calico-apiserver-74448999c6-6grzk" Nov 8 00:26:06.474309 kubelet[3231]: I1108 00:26:06.474274 3231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0ce899e0-d12a-4abb-b40d-26c4cc149868-calico-apiserver-certs\") pod \"calico-apiserver-74448999c6-6grzk\" (UID: \"0ce899e0-d12a-4abb-b40d-26c4cc149868\") " pod="calico-apiserver/calico-apiserver-74448999c6-6grzk" Nov 8 00:26:06.474405 kubelet[3231]: I1108 00:26:06.474328 3231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnn8w\" (UniqueName: \"kubernetes.io/projected/928ddcfe-b055-4feb-bfb6-23dedc6fa744-kube-api-access-bnn8w\") pod \"calico-kube-controllers-598d7bd9d8-kgsh2\" (UID: \"928ddcfe-b055-4feb-bfb6-23dedc6fa744\") " pod="calico-system/calico-kube-controllers-598d7bd9d8-kgsh2" Nov 8 00:26:06.474405 kubelet[3231]: I1108 00:26:06.474356 3231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xw4d\" (UniqueName: \"kubernetes.io/projected/dd3dad8e-8761-486d-82e7-516eeba2f8a7-kube-api-access-5xw4d\") pod \"coredns-66bc5c9577-tn4hf\" (UID: \"dd3dad8e-8761-486d-82e7-516eeba2f8a7\") " pod="kube-system/coredns-66bc5c9577-tn4hf" Nov 8 00:26:06.474504 kubelet[3231]: I1108 00:26:06.474384 3231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dd3dad8e-8761-486d-82e7-516eeba2f8a7-config-volume\") pod \"coredns-66bc5c9577-tn4hf\" (UID: \"dd3dad8e-8761-486d-82e7-516eeba2f8a7\") " pod="kube-system/coredns-66bc5c9577-tn4hf" Nov 8 00:26:06.474504 
kubelet[3231]: I1108 00:26:06.474449 3231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/928ddcfe-b055-4feb-bfb6-23dedc6fa744-tigera-ca-bundle\") pod \"calico-kube-controllers-598d7bd9d8-kgsh2\" (UID: \"928ddcfe-b055-4feb-bfb6-23dedc6fa744\") " pod="calico-system/calico-kube-controllers-598d7bd9d8-kgsh2" Nov 8 00:26:06.482016 systemd[1]: Created slice kubepods-besteffort-pod136d7667_9127_4dfa_b5ce_1dde786b7211.slice - libcontainer container kubepods-besteffort-pod136d7667_9127_4dfa_b5ce_1dde786b7211.slice. Nov 8 00:26:06.491418 containerd[1730]: time="2025-11-08T00:26:06.491062706Z" level=warning msg="cleanup warnings time=\"2025-11-08T00:26:06Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Nov 8 00:26:06.514239 systemd[1]: Created slice kubepods-besteffort-podfc5e470f_14b3_444c_ac8a_3efb084d3809.slice - libcontainer container kubepods-besteffort-podfc5e470f_14b3_444c_ac8a_3efb084d3809.slice. Nov 8 00:26:06.524393 systemd[1]: Created slice kubepods-burstable-poda589e2c8_4bc2_4178_8ecb_3723aaa6f7a2.slice - libcontainer container kubepods-burstable-poda589e2c8_4bc2_4178_8ecb_3723aaa6f7a2.slice. Nov 8 00:26:06.540872 systemd[1]: Created slice kubepods-besteffort-podee1131d4_80d0_4ed0_9ef4_8758722912cb.slice - libcontainer container kubepods-besteffort-podee1131d4_80d0_4ed0_9ef4_8758722912cb.slice. Nov 8 00:26:06.574957 kubelet[3231]: I1108 00:26:06.574909 3231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a589e2c8-4bc2-4178-8ecb-3723aaa6f7a2-config-volume\") pod \"coredns-66bc5c9577-llsdc\" (UID: \"a589e2c8-4bc2-4178-8ecb-3723aaa6f7a2\") " pod="kube-system/coredns-66bc5c9577-llsdc" Nov 8 00:26:06.575134 kubelet[3231]: I1108 00:26:06.574982 3231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee1131d4-80d0-4ed0-9ef4-8758722912cb-whisker-ca-bundle\") pod \"whisker-57fdf8d985-zxfxb\" (UID: \"ee1131d4-80d0-4ed0-9ef4-8758722912cb\") " pod="calico-system/whisker-57fdf8d985-zxfxb" Nov 8 00:26:06.575134 kubelet[3231]: I1108 00:26:06.575008 3231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ee1131d4-80d0-4ed0-9ef4-8758722912cb-whisker-backend-key-pair\") pod \"whisker-57fdf8d985-zxfxb\" (UID: \"ee1131d4-80d0-4ed0-9ef4-8758722912cb\") " pod="calico-system/whisker-57fdf8d985-zxfxb" Nov 8 00:26:06.575134 kubelet[3231]: I1108 00:26:06.575026 3231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/fc5e470f-14b3-444c-ac8a-3efb084d3809-goldmane-key-pair\") pod \"goldmane-7c778bb748-sn8mq\" (UID: \"fc5e470f-14b3-444c-ac8a-3efb084d3809\") " pod="calico-system/goldmane-7c778bb748-sn8mq" Nov 8 00:26:06.575134 kubelet[3231]: I1108 00:26:06.575063 3231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzpf8\" (UniqueName: \"kubernetes.io/projected/ee1131d4-80d0-4ed0-9ef4-8758722912cb-kube-api-access-gzpf8\") pod \"whisker-57fdf8d985-zxfxb\" (UID: \"ee1131d4-80d0-4ed0-9ef4-8758722912cb\") " 
pod="calico-system/whisker-57fdf8d985-zxfxb" Nov 8 00:26:06.575134 kubelet[3231]: I1108 00:26:06.575088 3231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc5e470f-14b3-444c-ac8a-3efb084d3809-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-sn8mq\" (UID: \"fc5e470f-14b3-444c-ac8a-3efb084d3809\") " pod="calico-system/goldmane-7c778bb748-sn8mq" Nov 8 00:26:06.575344 kubelet[3231]: I1108 00:26:06.575109 3231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/136d7667-9127-4dfa-b5ce-1dde786b7211-calico-apiserver-certs\") pod \"calico-apiserver-74448999c6-ltcnv\" (UID: \"136d7667-9127-4dfa-b5ce-1dde786b7211\") " pod="calico-apiserver/calico-apiserver-74448999c6-ltcnv" Nov 8 00:26:06.575344 kubelet[3231]: I1108 00:26:06.575128 3231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hssjn\" (UniqueName: \"kubernetes.io/projected/136d7667-9127-4dfa-b5ce-1dde786b7211-kube-api-access-hssjn\") pod \"calico-apiserver-74448999c6-ltcnv\" (UID: \"136d7667-9127-4dfa-b5ce-1dde786b7211\") " pod="calico-apiserver/calico-apiserver-74448999c6-ltcnv" Nov 8 00:26:06.575344 kubelet[3231]: I1108 00:26:06.575170 3231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnzmk\" (UniqueName: \"kubernetes.io/projected/fc5e470f-14b3-444c-ac8a-3efb084d3809-kube-api-access-pnzmk\") pod \"goldmane-7c778bb748-sn8mq\" (UID: \"fc5e470f-14b3-444c-ac8a-3efb084d3809\") " pod="calico-system/goldmane-7c778bb748-sn8mq" Nov 8 00:26:06.575344 kubelet[3231]: I1108 00:26:06.575194 3231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fwt8\" (UniqueName: \"kubernetes.io/projected/a589e2c8-4bc2-4178-8ecb-3723aaa6f7a2-kube-api-access-7fwt8\") pod \"coredns-66bc5c9577-llsdc\" (UID: \"a589e2c8-4bc2-4178-8ecb-3723aaa6f7a2\") " pod="kube-system/coredns-66bc5c9577-llsdc" Nov 8 00:26:06.575344 kubelet[3231]: I1108 00:26:06.575233 3231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc5e470f-14b3-444c-ac8a-3efb084d3809-config\") pod \"goldmane-7c778bb748-sn8mq\" (UID: \"fc5e470f-14b3-444c-ac8a-3efb084d3809\") " pod="calico-system/goldmane-7c778bb748-sn8mq" Nov 8 00:26:06.639232 containerd[1730]: time="2025-11-08T00:26:06.639179644Z" level=error msg="Failed to destroy network for sandbox \"fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:06.639834 containerd[1730]: time="2025-11-08T00:26:06.639784953Z" level=error msg="encountered an error cleaning up failed sandbox \"fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:06.640192 containerd[1730]: time="2025-11-08T00:26:06.639865354Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-kbfws,Uid:77eae253-1bce-4de5-8e9b-23a9c58b4ee0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:06.640384 kubelet[3231]: E1108 00:26:06.640349 3231 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:06.640482 kubelet[3231]: E1108 00:26:06.640415 3231 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kbfws" Nov 8 00:26:06.640482 kubelet[3231]: E1108 00:26:06.640442 3231 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kbfws" Nov 8 00:26:06.640570 kubelet[3231]: E1108 00:26:06.640512 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kbfws_calico-system(77eae253-1bce-4de5-8e9b-23a9c58b4ee0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kbfws_calico-system(77eae253-1bce-4de5-8e9b-23a9c58b4ee0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kbfws" podUID="77eae253-1bce-4de5-8e9b-23a9c58b4ee0" Nov 8 00:26:06.748951 containerd[1730]: time="2025-11-08T00:26:06.748806100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-tn4hf,Uid:dd3dad8e-8761-486d-82e7-516eeba2f8a7,Namespace:kube-system,Attempt:0,}" Nov 8 00:26:06.761692 containerd[1730]: time="2025-11-08T00:26:06.761637494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74448999c6-6grzk,Uid:0ce899e0-d12a-4abb-b40d-26c4cc149868,Namespace:calico-apiserver,Attempt:0,}" Nov 8 00:26:06.781817 containerd[1730]: time="2025-11-08T00:26:06.781770598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-598d7bd9d8-kgsh2,Uid:928ddcfe-b055-4feb-bfb6-23dedc6fa744,Namespace:calico-system,Attempt:0,}" Nov 8 00:26:06.793746 containerd[1730]: time="2025-11-08T00:26:06.793685878Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-74448999c6-ltcnv,Uid:136d7667-9127-4dfa-b5ce-1dde786b7211,Namespace:calico-apiserver,Attempt:0,}" Nov 8 00:26:06.824753 containerd[1730]: time="2025-11-08T00:26:06.824681147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-sn8mq,Uid:fc5e470f-14b3-444c-ac8a-3efb084d3809,Namespace:calico-system,Attempt:0,}" Nov 8 00:26:06.841745 containerd[1730]: time="2025-11-08T00:26:06.839804275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-llsdc,Uid:a589e2c8-4bc2-4178-8ecb-3723aaa6f7a2,Namespace:kube-system,Attempt:0,}" Nov 8 00:26:06.856364 containerd[1730]: time="2025-11-08T00:26:06.856318925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57fdf8d985-zxfxb,Uid:ee1131d4-80d0-4ed0-9ef4-8758722912cb,Namespace:calico-system,Attempt:0,}" Nov 8 00:26:06.868375 containerd[1730]: time="2025-11-08T00:26:06.868288005Z" level=error msg="Failed to destroy network for sandbox \"4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:06.868675 containerd[1730]: time="2025-11-08T00:26:06.868635411Z" level=error msg="encountered an error cleaning up failed sandbox \"4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:06.868803 containerd[1730]: time="2025-11-08T00:26:06.868699612Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-tn4hf,Uid:dd3dad8e-8761-486d-82e7-516eeba2f8a7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:06.869440 kubelet[3231]: E1108 00:26:06.868931 3231 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:06.869440 kubelet[3231]: E1108 00:26:06.868993 3231 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-tn4hf" Nov 8 00:26:06.869440 kubelet[3231]: E1108 00:26:06.869018 3231 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-tn4hf" Nov 8 00:26:06.869632 kubelet[3231]: E1108 00:26:06.869096 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-tn4hf_kube-system(dd3dad8e-8761-486d-82e7-516eeba2f8a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-tn4hf_kube-system(dd3dad8e-8761-486d-82e7-516eeba2f8a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-tn4hf" podUID="dd3dad8e-8761-486d-82e7-516eeba2f8a7" Nov 8 00:26:06.913889 containerd[1730]: time="2025-11-08T00:26:06.913837394Z" level=error msg="Failed to destroy network for sandbox \"1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:06.914181 containerd[1730]: time="2025-11-08T00:26:06.914139398Z" level=error msg="encountered an error cleaning up failed sandbox \"1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:06.914268 containerd[1730]: time="2025-11-08T00:26:06.914196699Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74448999c6-6grzk,Uid:0ce899e0-d12a-4abb-b40d-26c4cc149868,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:06.914488 kubelet[3231]: E1108 00:26:06.914451 3231 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:06.914571 kubelet[3231]: E1108 00:26:06.914515 3231 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74448999c6-6grzk" Nov 8 00:26:06.914571 kubelet[3231]: E1108 00:26:06.914544 3231 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74448999c6-6grzk" Nov 8 00:26:06.914654 kubelet[3231]: E1108 00:26:06.914609 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-74448999c6-6grzk_calico-apiserver(0ce899e0-d12a-4abb-b40d-26c4cc149868)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-74448999c6-6grzk_calico-apiserver(0ce899e0-d12a-4abb-b40d-26c4cc149868)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74448999c6-6grzk" podUID="0ce899e0-d12a-4abb-b40d-26c4cc149868" Nov 8 00:26:06.980607 kubelet[3231]: I1108 00:26:06.980268 3231 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 8 00:26:07.112758 containerd[1730]: time="2025-11-08T00:26:07.111970188Z" level=error msg="Failed to destroy network for sandbox \"a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:07.112758 containerd[1730]: time="2025-11-08T00:26:07.112527596Z" level=error msg="encountered an error cleaning up failed sandbox \"a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:07.112758 containerd[1730]: time="2025-11-08T00:26:07.112588597Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-598d7bd9d8-kgsh2,Uid:928ddcfe-b055-4feb-bfb6-23dedc6fa744,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:07.113789 kubelet[3231]: E1108 00:26:07.113302 3231 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:07.113789 kubelet[3231]: E1108 00:26:07.113375 3231 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-598d7bd9d8-kgsh2" Nov 8 00:26:07.113789 kubelet[3231]: E1108 
00:26:07.113407 3231 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-598d7bd9d8-kgsh2" Nov 8 00:26:07.114006 kubelet[3231]: E1108 00:26:07.113482 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-598d7bd9d8-kgsh2_calico-system(928ddcfe-b055-4feb-bfb6-23dedc6fa744)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-598d7bd9d8-kgsh2_calico-system(928ddcfe-b055-4feb-bfb6-23dedc6fa744)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-598d7bd9d8-kgsh2" podUID="928ddcfe-b055-4feb-bfb6-23dedc6fa744" Nov 8 00:26:07.140211 containerd[1730]: time="2025-11-08T00:26:07.140159613Z" level=error msg="Failed to destroy network for sandbox \"81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:07.141209 containerd[1730]: time="2025-11-08T00:26:07.141171029Z" level=error msg="encountered an error cleaning up failed sandbox \"81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:07.141389 containerd[1730]: time="2025-11-08T00:26:07.141359732Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74448999c6-ltcnv,Uid:136d7667-9127-4dfa-b5ce-1dde786b7211,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:07.143221 kubelet[3231]: E1108 00:26:07.142161 3231 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:07.143221 kubelet[3231]: E1108 00:26:07.142240 3231 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74448999c6-ltcnv" Nov 8 00:26:07.143221 kubelet[3231]: E1108 00:26:07.142268 3231 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74448999c6-ltcnv" Nov 8 00:26:07.143438 kubelet[3231]: E1108 00:26:07.142335 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-74448999c6-ltcnv_calico-apiserver(136d7667-9127-4dfa-b5ce-1dde786b7211)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-74448999c6-ltcnv_calico-apiserver(136d7667-9127-4dfa-b5ce-1dde786b7211)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74448999c6-ltcnv" podUID="136d7667-9127-4dfa-b5ce-1dde786b7211" Nov 8 00:26:07.157229 containerd[1730]: time="2025-11-08T00:26:07.157159970Z" level=error msg="Failed to destroy network for sandbox \"53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:07.157836 containerd[1730]: time="2025-11-08T00:26:07.157744679Z" level=error msg="encountered an error cleaning up failed sandbox \"53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:07.158067 containerd[1730]: time="2025-11-08T00:26:07.158016183Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-sn8mq,Uid:fc5e470f-14b3-444c-ac8a-3efb084d3809,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:07.158465 kubelet[3231]: E1108 00:26:07.158431 3231 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:07.158850 kubelet[3231]: E1108 00:26:07.158660 3231 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-sn8mq" Nov 8 00:26:07.158850 kubelet[3231]: E1108 00:26:07.158696 3231 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-sn8mq" Nov 8 00:26:07.160218 kubelet[3231]: E1108 00:26:07.158815 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-sn8mq_calico-system(fc5e470f-14b3-444c-ac8a-3efb084d3809)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-sn8mq_calico-system(fc5e470f-14b3-444c-ac8a-3efb084d3809)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-sn8mq" podUID="fc5e470f-14b3-444c-ac8a-3efb084d3809" Nov 8 00:26:07.175046 containerd[1730]: time="2025-11-08T00:26:07.174270029Z" level=error msg="Failed to destroy network for sandbox \"2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:07.175046 containerd[1730]: time="2025-11-08T00:26:07.174792037Z" level=error msg="encountered an error cleaning up failed sandbox \"2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:07.175046 containerd[1730]: time="2025-11-08T00:26:07.174871638Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-llsdc,Uid:a589e2c8-4bc2-4178-8ecb-3723aaa6f7a2,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:07.175648 containerd[1730]: time="2025-11-08T00:26:07.175105841Z" level=error msg="Failed to destroy network for sandbox \"1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:07.175648 containerd[1730]: time="2025-11-08T00:26:07.175528648Z" level=error msg="encountered an error cleaning up failed sandbox \"1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:07.175759 kubelet[3231]: E1108 00:26:07.175323 3231 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:07.175833 containerd[1730]: time="2025-11-08T00:26:07.175657950Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-57fdf8d985-zxfxb,Uid:ee1131d4-80d0-4ed0-9ef4-8758722912cb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:07.176023 kubelet[3231]: E1108 00:26:07.175991 3231 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:07.176103 kubelet[3231]: E1108 00:26:07.176034 3231 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-57fdf8d985-zxfxb" Nov 8 00:26:07.176103 kubelet[3231]: E1108 00:26:07.176058 3231 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-57fdf8d985-zxfxb" Nov 8 00:26:07.176193 kubelet[3231]: E1108 00:26:07.176120 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-57fdf8d985-zxfxb_calico-system(ee1131d4-80d0-4ed0-9ef4-8758722912cb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-57fdf8d985-zxfxb_calico-system(ee1131d4-80d0-4ed0-9ef4-8758722912cb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-57fdf8d985-zxfxb" podUID="ee1131d4-80d0-4ed0-9ef4-8758722912cb" Nov 8 00:26:07.176361 kubelet[3231]: E1108 00:26:07.175389 3231 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-llsdc" Nov 8 00:26:07.176432 kubelet[3231]: E1108 00:26:07.176365 3231 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-llsdc" Nov 8 00:26:07.176479 kubelet[3231]: E1108 00:26:07.176441 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-llsdc_kube-system(a589e2c8-4bc2-4178-8ecb-3723aaa6f7a2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-llsdc_kube-system(a589e2c8-4bc2-4178-8ecb-3723aaa6f7a2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-llsdc" podUID="a589e2c8-4bc2-4178-8ecb-3723aaa6f7a2" Nov 8 00:26:07.240934 kubelet[3231]: I1108 00:26:07.240900 3231 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591" Nov 8 00:26:07.242118 containerd[1730]: time="2025-11-08T00:26:07.241790149Z" level=info msg="StopPodSandbox for \"53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591\"" Nov 8 00:26:07.242118 containerd[1730]: time="2025-11-08T00:26:07.241998652Z" level=info msg="Ensure that sandbox 53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591 in task-service has been cleanup successfully" Nov 8 00:26:07.245342 kubelet[3231]: I1108 00:26:07.245313 3231 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54" Nov 8 00:26:07.247211 containerd[1730]: time="2025-11-08T00:26:07.247178631Z" level=info msg="StopPodSandbox for \"1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54\"" Nov 8 00:26:07.247389 containerd[1730]: time="2025-11-08T00:26:07.247365233Z" level=info msg="Ensure that sandbox 1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54 in task-service has been cleanup successfully" Nov 8 00:26:07.250673 kubelet[3231]: I1108 00:26:07.250602 3231 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279" Nov 8 00:26:07.251267 containerd[1730]: time="2025-11-08T00:26:07.251190691Z" level=info msg="StopPodSandbox for \"81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279\"" Nov 8 00:26:07.251615 containerd[1730]: time="2025-11-08T00:26:07.251564297Z" level=info msg="Ensure that sandbox 81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279 in task-service has been cleanup successfully" Nov 8 00:26:07.255821 kubelet[3231]: I1108 00:26:07.255531 3231 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd" Nov 8 00:26:07.257160 containerd[1730]: time="2025-11-08T00:26:07.257000379Z" level=info msg="StopPodSandbox for \"1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd\"" Nov 8 00:26:07.258389 containerd[1730]: time="2025-11-08T00:26:07.258323799Z" level=info msg="Ensure that sandbox 1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd in task-service has been cleanup successfully" Nov 8 00:26:07.258648 kubelet[3231]: I1108 00:26:07.258540 3231 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950" Nov 8 00:26:07.261255 containerd[1730]: time="2025-11-08T00:26:07.260864937Z" level=info msg="StopPodSandbox for \"4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950\"" Nov 8 00:26:07.261255 containerd[1730]: time="2025-11-08T00:26:07.261033840Z" level=info msg="Ensure that sandbox 4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950 in task-service has been cleanup successfully" Nov 8 00:26:07.275741 containerd[1730]: time="2025-11-08T00:26:07.274693546Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 8 00:26:07.279421 kubelet[3231]: I1108 00:26:07.279391 3231 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44" Nov 8 00:26:07.285742 containerd[1730]: time="2025-11-08T00:26:07.284565495Z" level=info msg="StopPodSandbox for \"2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44\"" Nov 8 00:26:07.285999 containerd[1730]: time="2025-11-08T00:26:07.285957816Z" level=info msg="Ensure that sandbox 2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44 in task-service has been cleanup successfully" Nov 8 00:26:07.290792 kubelet[3231]: I1108 00:26:07.290766 3231 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762" Nov 8 00:26:07.293200 containerd[1730]: time="2025-11-08T00:26:07.293169925Z" level=info msg="StopPodSandbox for \"a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762\"" Nov 8 00:26:07.293486 containerd[1730]: time="2025-11-08T00:26:07.293351828Z" level=info msg="Ensure that sandbox a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762 in task-service has been cleanup successfully" Nov 8 00:26:07.294567 kubelet[3231]: I1108 00:26:07.294168 3231 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5" Nov 8 00:26:07.297305 containerd[1730]: time="2025-11-08T00:26:07.297015184Z" level=info msg="StopPodSandbox for \"fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5\"" Nov 8 00:26:07.297305 containerd[1730]: time="2025-11-08T00:26:07.297223987Z" level=info msg="Ensure that sandbox fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5 in task-service has been cleanup successfully" Nov 8 00:26:07.368714 containerd[1730]: time="2025-11-08T00:26:07.367631169Z" level=error msg="StopPodSandbox for \"53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591\" failed" error="failed to destroy network for sandbox \"53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:07.369256 kubelet[3231]: E1108 00:26:07.369187 3231 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591" Nov 8 00:26:07.369447 kubelet[3231]: E1108 00:26:07.369284 3231 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591"} Nov 8 00:26:07.369447 kubelet[3231]: E1108 00:26:07.369352 3231 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fc5e470f-14b3-444c-ac8a-3efb084d3809\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:26:07.369447 kubelet[3231]: E1108 00:26:07.369389 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fc5e470f-14b3-444c-ac8a-3efb084d3809\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-sn8mq" podUID="fc5e470f-14b3-444c-ac8a-3efb084d3809" Nov 8 00:26:07.397062 containerd[1730]: time="2025-11-08T00:26:07.396920842Z" level=error msg="StopPodSandbox for \"1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54\" failed" error="failed to destroy network for sandbox \"1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:07.397220 kubelet[3231]: E1108 00:26:07.397171 3231 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54" Nov 8 00:26:07.397287 kubelet[3231]: E1108 00:26:07.397214 3231 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54"} Nov 8 00:26:07.397287 kubelet[3231]: E1108 00:26:07.397249 3231 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ee1131d4-80d0-4ed0-9ef4-8758722912cb\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:26:07.397413 kubelet[3231]: E1108 00:26:07.397279 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ee1131d4-80d0-4ed0-9ef4-8758722912cb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-57fdf8d985-zxfxb" podUID="ee1131d4-80d0-4ed0-9ef4-8758722912cb" Nov 8 00:26:07.429149 containerd[1730]: time="2025-11-08T00:26:07.428753794Z" level=error msg="StopPodSandbox for \"2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44\" failed" error="failed to destroy network for sandbox \"2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:07.429607 kubelet[3231]: E1108 00:26:07.429095 3231 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44" Nov 8 00:26:07.429607 kubelet[3231]: E1108 00:26:07.429164 3231 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44"} Nov 8 00:26:07.429607 kubelet[3231]: E1108 00:26:07.429202 3231 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a589e2c8-4bc2-4178-8ecb-3723aaa6f7a2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:26:07.429607 kubelet[3231]: E1108 00:26:07.429239 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a589e2c8-4bc2-4178-8ecb-3723aaa6f7a2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-llsdc" podUID="a589e2c8-4bc2-4178-8ecb-3723aaa6f7a2" Nov 8 00:26:07.430426 kubelet[3231]: E1108 00:26:07.430156 3231 log.go:32] "StopPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950" Nov 8 00:26:07.430426 kubelet[3231]: E1108 00:26:07.430203 3231 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950"} Nov 8 00:26:07.430426 kubelet[3231]: E1108 00:26:07.430237 3231 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dd3dad8e-8761-486d-82e7-516eeba2f8a7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:26:07.430426 kubelet[3231]: E1108 00:26:07.430275 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dd3dad8e-8761-486d-82e7-516eeba2f8a7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-tn4hf" podUID="dd3dad8e-8761-486d-82e7-516eeba2f8a7" Nov 8 00:26:07.430635 containerd[1730]: time="2025-11-08T00:26:07.429878713Z" level=error msg="StopPodSandbox for \"4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950\" failed" error="failed to destroy network for sandbox \"4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:07.439753 containerd[1730]: time="2025-11-08T00:26:07.439016464Z" level=error msg="StopPodSandbox for \"81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279\" failed" error="failed to destroy network for sandbox \"81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:07.439913 kubelet[3231]: E1108 00:26:07.439305 3231 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279" Nov 8 00:26:07.439913 kubelet[3231]: E1108 00:26:07.439361 3231 kuberuntime_manager.go:1665] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279"} Nov 8 00:26:07.439913 kubelet[3231]: E1108 00:26:07.439400 3231 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"136d7667-9127-4dfa-b5ce-1dde786b7211\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:26:07.439913 kubelet[3231]: E1108 00:26:07.439446 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"136d7667-9127-4dfa-b5ce-1dde786b7211\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74448999c6-ltcnv" podUID="136d7667-9127-4dfa-b5ce-1dde786b7211" Nov 8 00:26:07.440782 containerd[1730]: time="2025-11-08T00:26:07.440716492Z" level=error msg="StopPodSandbox for \"fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5\" failed" error="failed to destroy network for sandbox \"fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:07.441154 kubelet[3231]: E1108 00:26:07.441117 3231 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5" Nov 8 00:26:07.441291 kubelet[3231]: E1108 00:26:07.441273 3231 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5"} Nov 8 00:26:07.441385 kubelet[3231]: E1108 00:26:07.441370 3231 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"77eae253-1bce-4de5-8e9b-23a9c58b4ee0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:26:07.441537 kubelet[3231]: E1108 00:26:07.441514 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"77eae253-1bce-4de5-8e9b-23a9c58b4ee0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kbfws" podUID="77eae253-1bce-4de5-8e9b-23a9c58b4ee0" Nov 8 00:26:07.443975 containerd[1730]: time="2025-11-08T00:26:07.443854943Z" level=error msg="StopPodSandbox for \"1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd\" failed" error="failed to destroy network for sandbox \"1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:07.444349 kubelet[3231]: E1108 00:26:07.444065 3231 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd" Nov 8 00:26:07.444349 kubelet[3231]: E1108 00:26:07.444117 3231 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd"} Nov 8 00:26:07.444349 kubelet[3231]: E1108 00:26:07.444155 3231 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0ce899e0-d12a-4abb-b40d-26c4cc149868\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:26:07.444349 kubelet[3231]: E1108 00:26:07.444188 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0ce899e0-d12a-4abb-b40d-26c4cc149868\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74448999c6-6grzk" podUID="0ce899e0-d12a-4abb-b40d-26c4cc149868" Nov 8 00:26:07.449103 containerd[1730]: time="2025-11-08T00:26:07.449051529Z" level=error msg="StopPodSandbox for \"a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762\" failed" error="failed to destroy network for sandbox \"a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 8 00:26:07.449344 kubelet[3231]: E1108 00:26:07.449305 3231 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762" Nov 8 00:26:07.449432 kubelet[3231]: E1108 00:26:07.449350 3231 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762"} Nov 8 00:26:07.449432 kubelet[3231]: E1108 00:26:07.449384 3231 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"928ddcfe-b055-4feb-bfb6-23dedc6fa744\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 8 00:26:07.449432 kubelet[3231]: E1108 00:26:07.449418 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"928ddcfe-b055-4feb-bfb6-23dedc6fa744\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-598d7bd9d8-kgsh2" podUID="928ddcfe-b055-4feb-bfb6-23dedc6fa744" Nov 8 00:26:07.507036 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5-shm.mount: Deactivated successfully. Nov 8 00:26:15.597730 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1123302813.mount: Deactivated successfully. 
Nov 8 00:26:15.635792 containerd[1730]: time="2025-11-08T00:26:15.635741321Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:15.638824 containerd[1730]: time="2025-11-08T00:26:15.638678361Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 8 00:26:15.642739 containerd[1730]: time="2025-11-08T00:26:15.641787103Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:15.646636 containerd[1730]: time="2025-11-08T00:26:15.645973060Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:26:15.646636 containerd[1730]: time="2025-11-08T00:26:15.646495867Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 8.371529517s" Nov 8 00:26:15.646636 containerd[1730]: time="2025-11-08T00:26:15.646531467Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 8 00:26:15.671099 containerd[1730]: time="2025-11-08T00:26:15.671063900Z" level=info msg="CreateContainer within sandbox \"8313fe7e4f05a7ac191a0ebc12ef0de4e74a2480a2be3ed47e58d372533415db\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 8 00:26:15.734959 containerd[1730]: time="2025-11-08T00:26:15.734910965Z" level=info msg="CreateContainer within sandbox \"8313fe7e4f05a7ac191a0ebc12ef0de4e74a2480a2be3ed47e58d372533415db\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d002ffc835705b323fa4eceed52c48a1d0a9167537104fbe59ca3caf5760f94f\"" Nov 8 00:26:15.735749 containerd[1730]: time="2025-11-08T00:26:15.735503573Z" level=info msg="StartContainer for \"d002ffc835705b323fa4eceed52c48a1d0a9167537104fbe59ca3caf5760f94f\"" Nov 8 00:26:15.763883 systemd[1]: Started cri-containerd-d002ffc835705b323fa4eceed52c48a1d0a9167537104fbe59ca3caf5760f94f.scope - libcontainer container d002ffc835705b323fa4eceed52c48a1d0a9167537104fbe59ca3caf5760f94f. Nov 8 00:26:15.797985 containerd[1730]: time="2025-11-08T00:26:15.797944519Z" level=info msg="StartContainer for \"d002ffc835705b323fa4eceed52c48a1d0a9167537104fbe59ca3caf5760f94f\" returns successfully" Nov 8 00:26:16.203407 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 8 00:26:16.203581 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
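Note: the calico/node pull above completes at 00:26:15 after 8.371529517 s for 156,883,675 bytes read, roughly 18.7 MB/s from ghcr.io; with the image unpacked and the container started, the WireGuard kernel module loads immediately afterwards, most likely because calico-node probes the kernel for WireGuard encryption support at startup. A throwaway Go check of the throughput arithmetic, with both figures copied from the containerd lines above:

package main

import "fmt"

func main() {
	// Figures copied from the containerd pull-completion entries above.
	const bytesRead = 156883675 // "bytes read" at pull completion
	const seconds = 8.371529517 // duration containerd reports for the pull
	fmt.Printf("%.1f MB/s\n", bytesRead/seconds/1e6) // prints: 18.7 MB/s
}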
Nov 8 00:26:16.316482 containerd[1730]: time="2025-11-08T00:26:16.316421743Z" level=info msg="StopPodSandbox for \"1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54\"" Nov 8 00:26:16.392775 kubelet[3231]: I1108 00:26:16.392249 3231 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-jz75c" podStartSLOduration=1.456844466 podStartE2EDuration="24.39222777s" podCreationTimestamp="2025-11-08 00:25:52 +0000 UTC" firstStartedPulling="2025-11-08 00:25:52.712064876 +0000 UTC m=+22.296604976" lastFinishedPulling="2025-11-08 00:26:15.64744818 +0000 UTC m=+45.231988280" observedRunningTime="2025-11-08 00:26:16.390403846 +0000 UTC m=+45.974944046" watchObservedRunningTime="2025-11-08 00:26:16.39222777 +0000 UTC m=+45.976767970" Nov 8 00:26:16.486061 containerd[1730]: 2025-11-08 00:26:16.438 [INFO][4439] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54" Nov 8 00:26:16.486061 containerd[1730]: 2025-11-08 00:26:16.438 [INFO][4439] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54" iface="eth0" netns="/var/run/netns/cni-85bb01b6-a87f-c990-95e5-de8a2821490f" Nov 8 00:26:16.486061 containerd[1730]: 2025-11-08 00:26:16.438 [INFO][4439] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54" iface="eth0" netns="/var/run/netns/cni-85bb01b6-a87f-c990-95e5-de8a2821490f" Nov 8 00:26:16.486061 containerd[1730]: 2025-11-08 00:26:16.439 [INFO][4439] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54" iface="eth0" netns="/var/run/netns/cni-85bb01b6-a87f-c990-95e5-de8a2821490f" Nov 8 00:26:16.486061 containerd[1730]: 2025-11-08 00:26:16.439 [INFO][4439] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54" Nov 8 00:26:16.486061 containerd[1730]: 2025-11-08 00:26:16.439 [INFO][4439] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54" Nov 8 00:26:16.486061 containerd[1730]: 2025-11-08 00:26:16.472 [INFO][4461] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54" HandleID="k8s-pod-network.1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-whisker--57fdf8d985--zxfxb-eth0" Nov 8 00:26:16.486061 containerd[1730]: 2025-11-08 00:26:16.472 [INFO][4461] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:16.486061 containerd[1730]: 2025-11-08 00:26:16.472 [INFO][4461] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:16.486061 containerd[1730]: 2025-11-08 00:26:16.478 [WARNING][4461] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54" HandleID="k8s-pod-network.1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-whisker--57fdf8d985--zxfxb-eth0" Nov 8 00:26:16.486061 containerd[1730]: 2025-11-08 00:26:16.478 [INFO][4461] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54" HandleID="k8s-pod-network.1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-whisker--57fdf8d985--zxfxb-eth0" Nov 8 00:26:16.486061 containerd[1730]: 2025-11-08 00:26:16.479 [INFO][4461] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:16.486061 containerd[1730]: 2025-11-08 00:26:16.483 [INFO][4439] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54" Nov 8 00:26:16.487676 containerd[1730]: time="2025-11-08T00:26:16.486782751Z" level=info msg="TearDown network for sandbox \"1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54\" successfully" Nov 8 00:26:16.487676 containerd[1730]: time="2025-11-08T00:26:16.486830652Z" level=info msg="StopPodSandbox for \"1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54\" returns successfully" Nov 8 00:26:16.553266 kubelet[3231]: I1108 00:26:16.552629 3231 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee1131d4-80d0-4ed0-9ef4-8758722912cb-whisker-ca-bundle\") pod \"ee1131d4-80d0-4ed0-9ef4-8758722912cb\" (UID: \"ee1131d4-80d0-4ed0-9ef4-8758722912cb\") " Nov 8 00:26:16.553266 kubelet[3231]: I1108 00:26:16.552716 3231 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ee1131d4-80d0-4ed0-9ef4-8758722912cb-whisker-backend-key-pair\") pod \"ee1131d4-80d0-4ed0-9ef4-8758722912cb\" (UID: \"ee1131d4-80d0-4ed0-9ef4-8758722912cb\") " Nov 8 00:26:16.553266 kubelet[3231]: I1108 00:26:16.552774 3231 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzpf8\" (UniqueName: \"kubernetes.io/projected/ee1131d4-80d0-4ed0-9ef4-8758722912cb-kube-api-access-gzpf8\") pod \"ee1131d4-80d0-4ed0-9ef4-8758722912cb\" (UID: \"ee1131d4-80d0-4ed0-9ef4-8758722912cb\") " Nov 8 00:26:16.553923 kubelet[3231]: I1108 00:26:16.553888 3231 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee1131d4-80d0-4ed0-9ef4-8758722912cb-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "ee1131d4-80d0-4ed0-9ef4-8758722912cb" (UID: "ee1131d4-80d0-4ed0-9ef4-8758722912cb"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 8 00:26:16.558298 kubelet[3231]: I1108 00:26:16.558267 3231 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee1131d4-80d0-4ed0-9ef4-8758722912cb-kube-api-access-gzpf8" (OuterVolumeSpecName: "kube-api-access-gzpf8") pod "ee1131d4-80d0-4ed0-9ef4-8758722912cb" (UID: "ee1131d4-80d0-4ed0-9ef4-8758722912cb"). InnerVolumeSpecName "kube-api-access-gzpf8". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 8 00:26:16.559251 kubelet[3231]: I1108 00:26:16.558771 3231 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee1131d4-80d0-4ed0-9ef4-8758722912cb-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "ee1131d4-80d0-4ed0-9ef4-8758722912cb" (UID: "ee1131d4-80d0-4ed0-9ef4-8758722912cb"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 8 00:26:16.602702 systemd[1]: run-netns-cni\x2d85bb01b6\x2da87f\x2dc990\x2d95e5\x2dde8a2821490f.mount: Deactivated successfully. Nov 8 00:26:16.604943 systemd[1]: var-lib-kubelet-pods-ee1131d4\x2d80d0\x2d4ed0\x2d9ef4\x2d8758722912cb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgzpf8.mount: Deactivated successfully. Nov 8 00:26:16.605210 systemd[1]: var-lib-kubelet-pods-ee1131d4\x2d80d0\x2d4ed0\x2d9ef4\x2d8758722912cb-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 8 00:26:16.653185 kubelet[3231]: I1108 00:26:16.653132 3231 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ee1131d4-80d0-4ed0-9ef4-8758722912cb-whisker-backend-key-pair\") on node \"ci-4081.3.6-n-2742f1d4ae\" DevicePath \"\"" Nov 8 00:26:16.653185 kubelet[3231]: I1108 00:26:16.653176 3231 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gzpf8\" (UniqueName: \"kubernetes.io/projected/ee1131d4-80d0-4ed0-9ef4-8758722912cb-kube-api-access-gzpf8\") on node \"ci-4081.3.6-n-2742f1d4ae\" DevicePath \"\"" Nov 8 00:26:16.653185 kubelet[3231]: I1108 00:26:16.653189 3231 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee1131d4-80d0-4ed0-9ef4-8758722912cb-whisker-ca-bundle\") on node \"ci-4081.3.6-n-2742f1d4ae\" DevicePath \"\"" Nov 8 00:26:17.134740 systemd[1]: Removed slice kubepods-besteffort-podee1131d4_80d0_4ed0_9ef4_8758722912cb.slice - libcontainer container kubepods-besteffort-podee1131d4_80d0_4ed0_9ef4_8758722912cb.slice. Nov 8 00:26:17.457780 systemd[1]: Created slice kubepods-besteffort-pod5b3e910f_ad36_41f9_8d09_74f4b684ee03.slice - libcontainer container kubepods-besteffort-pod5b3e910f_ad36_41f9_8d09_74f4b684ee03.slice. 
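Note: the reconciler entries above show kubelet releasing the dead whisker pod's three volumes (the whisker-ca-bundle configmap, the whisker-backend-key-pair secret, and the projected kube-api-access token) before the pod's cgroup slice is removed at 00:26:17.134 and a slice for its replacement is created at 00:26:17.457. The \x2d and \x7e runs in the mount and netns unit names are not corruption; they are systemd's escaping of filesystem paths into unit names ("/" becomes "-", so a literal "-" or "~" in the path must be hex-escaped). A rough Go rendition of the rule, an approximation of systemd-escape --path rather than the exact implementation:

package main

import "fmt"

// escapePath approximates systemd's path-to-unit-name escaping: drop
// the leading "/", turn the remaining separators into "-", and
// hex-escape any byte outside [A-Za-z0-9:_.] so the original "-" and
// "~" survive round-tripping. Edge cases (leading dots, trailing
// slashes, the root path) are ignored in this sketch.
func escapePath(p string) string {
	if len(p) > 0 && p[0] == '/' {
		p = p[1:]
	}
	var out []byte
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			out = append(out, '-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == ':', c == '_', c == '.':
			out = append(out, c)
		default:
			out = append(out, fmt.Sprintf(`\x%02x`, c)...)
		}
	}
	return string(out)
}

func main() {
	// Reproduces the netns mount unit deactivated at 00:26:16.602702.
	fmt.Println(escapePath("/run/netns/cni-85bb01b6-a87f-c990-95e5-de8a2821490f") + ".mount")
}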
Nov 8 00:26:17.559225 kubelet[3231]: I1108 00:26:17.559136 3231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5b3e910f-ad36-41f9-8d09-74f4b684ee03-whisker-ca-bundle\") pod \"whisker-7747c7fc7-whbjm\" (UID: \"5b3e910f-ad36-41f9-8d09-74f4b684ee03\") " pod="calico-system/whisker-7747c7fc7-whbjm" Nov 8 00:26:17.559225 kubelet[3231]: I1108 00:26:17.559250 3231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6gjg\" (UniqueName: \"kubernetes.io/projected/5b3e910f-ad36-41f9-8d09-74f4b684ee03-kube-api-access-z6gjg\") pod \"whisker-7747c7fc7-whbjm\" (UID: \"5b3e910f-ad36-41f9-8d09-74f4b684ee03\") " pod="calico-system/whisker-7747c7fc7-whbjm" Nov 8 00:26:17.560032 kubelet[3231]: I1108 00:26:17.559287 3231 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5b3e910f-ad36-41f9-8d09-74f4b684ee03-whisker-backend-key-pair\") pod \"whisker-7747c7fc7-whbjm\" (UID: \"5b3e910f-ad36-41f9-8d09-74f4b684ee03\") " pod="calico-system/whisker-7747c7fc7-whbjm" Nov 8 00:26:17.768638 containerd[1730]: time="2025-11-08T00:26:17.768518317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7747c7fc7-whbjm,Uid:5b3e910f-ad36-41f9-8d09-74f4b684ee03,Namespace:calico-system,Attempt:0,}" Nov 8 00:26:17.982657 systemd-networkd[1352]: cali4856ed55d09: Link UP Nov 8 00:26:17.985308 systemd-networkd[1352]: cali4856ed55d09: Gained carrier Nov 8 00:26:18.014748 containerd[1730]: 2025-11-08 00:26:17.830 [INFO][4512] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 8 00:26:18.014748 containerd[1730]: 2025-11-08 00:26:17.839 [INFO][4512] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--2742f1d4ae-k8s-whisker--7747c7fc7--whbjm-eth0 whisker-7747c7fc7- calico-system 5b3e910f-ad36-41f9-8d09-74f4b684ee03 902 0 2025-11-08 00:26:17 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7747c7fc7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-n-2742f1d4ae whisker-7747c7fc7-whbjm eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali4856ed55d09 [] [] }} ContainerID="aed888d869e22e99fa4bfac31daac1e7394182f72f9c5583dd91973bdb9f4297" Namespace="calico-system" Pod="whisker-7747c7fc7-whbjm" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-whisker--7747c7fc7--whbjm-" Nov 8 00:26:18.014748 containerd[1730]: 2025-11-08 00:26:17.839 [INFO][4512] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aed888d869e22e99fa4bfac31daac1e7394182f72f9c5583dd91973bdb9f4297" Namespace="calico-system" Pod="whisker-7747c7fc7-whbjm" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-whisker--7747c7fc7--whbjm-eth0" Nov 8 00:26:18.014748 containerd[1730]: 2025-11-08 00:26:17.890 [INFO][4531] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aed888d869e22e99fa4bfac31daac1e7394182f72f9c5583dd91973bdb9f4297" HandleID="k8s-pod-network.aed888d869e22e99fa4bfac31daac1e7394182f72f9c5583dd91973bdb9f4297" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-whisker--7747c7fc7--whbjm-eth0" Nov 8 00:26:18.014748 containerd[1730]: 2025-11-08 00:26:17.890 [INFO][4531] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="aed888d869e22e99fa4bfac31daac1e7394182f72f9c5583dd91973bdb9f4297" HandleID="k8s-pod-network.aed888d869e22e99fa4bfac31daac1e7394182f72f9c5583dd91973bdb9f4297" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-whisker--7747c7fc7--whbjm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f120), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-2742f1d4ae", "pod":"whisker-7747c7fc7-whbjm", "timestamp":"2025-11-08 00:26:17.890357268 +0000 UTC"}, Hostname:"ci-4081.3.6-n-2742f1d4ae", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:26:18.014748 containerd[1730]: 2025-11-08 00:26:17.891 [INFO][4531] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:18.014748 containerd[1730]: 2025-11-08 00:26:17.891 [INFO][4531] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:18.014748 containerd[1730]: 2025-11-08 00:26:17.891 [INFO][4531] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-2742f1d4ae' Nov 8 00:26:18.014748 containerd[1730]: 2025-11-08 00:26:17.902 [INFO][4531] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.aed888d869e22e99fa4bfac31daac1e7394182f72f9c5583dd91973bdb9f4297" host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:18.014748 containerd[1730]: 2025-11-08 00:26:17.907 [INFO][4531] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:18.014748 containerd[1730]: 2025-11-08 00:26:17.912 [INFO][4531] ipam/ipam.go 511: Trying affinity for 192.168.59.64/26 host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:18.014748 containerd[1730]: 2025-11-08 00:26:17.917 [INFO][4531] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.64/26 host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:18.014748 containerd[1730]: 2025-11-08 00:26:17.919 [INFO][4531] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.64/26 host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:18.014748 containerd[1730]: 2025-11-08 00:26:17.920 [INFO][4531] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.59.64/26 handle="k8s-pod-network.aed888d869e22e99fa4bfac31daac1e7394182f72f9c5583dd91973bdb9f4297" host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:18.014748 containerd[1730]: 2025-11-08 00:26:17.922 [INFO][4531] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.aed888d869e22e99fa4bfac31daac1e7394182f72f9c5583dd91973bdb9f4297 Nov 8 00:26:18.014748 containerd[1730]: 2025-11-08 00:26:17.929 [INFO][4531] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.59.64/26 handle="k8s-pod-network.aed888d869e22e99fa4bfac31daac1e7394182f72f9c5583dd91973bdb9f4297" host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:18.014748 containerd[1730]: 2025-11-08 00:26:17.941 [INFO][4531] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.59.65/26] block=192.168.59.64/26 handle="k8s-pod-network.aed888d869e22e99fa4bfac31daac1e7394182f72f9c5583dd91973bdb9f4297" host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:18.014748 containerd[1730]: 2025-11-08 00:26:17.942 [INFO][4531] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.65/26] handle="k8s-pod-network.aed888d869e22e99fa4bfac31daac1e7394182f72f9c5583dd91973bdb9f4297" host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:18.014748 containerd[1730]: 2025-11-08 00:26:17.942 [INFO][4531] ipam/ipam_plugin.go 398: 
Released host-wide IPAM lock. Nov 8 00:26:18.014748 containerd[1730]: 2025-11-08 00:26:17.942 [INFO][4531] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.59.65/26] IPv6=[] ContainerID="aed888d869e22e99fa4bfac31daac1e7394182f72f9c5583dd91973bdb9f4297" HandleID="k8s-pod-network.aed888d869e22e99fa4bfac31daac1e7394182f72f9c5583dd91973bdb9f4297" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-whisker--7747c7fc7--whbjm-eth0" Nov 8 00:26:18.015716 containerd[1730]: 2025-11-08 00:26:17.946 [INFO][4512] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aed888d869e22e99fa4bfac31daac1e7394182f72f9c5583dd91973bdb9f4297" Namespace="calico-system" Pod="whisker-7747c7fc7-whbjm" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-whisker--7747c7fc7--whbjm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2742f1d4ae-k8s-whisker--7747c7fc7--whbjm-eth0", GenerateName:"whisker-7747c7fc7-", Namespace:"calico-system", SelfLink:"", UID:"5b3e910f-ad36-41f9-8d09-74f4b684ee03", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 26, 17, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7747c7fc7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2742f1d4ae", ContainerID:"", Pod:"whisker-7747c7fc7-whbjm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.59.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4856ed55d09", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:18.015716 containerd[1730]: 2025-11-08 00:26:17.946 [INFO][4512] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.65/32] ContainerID="aed888d869e22e99fa4bfac31daac1e7394182f72f9c5583dd91973bdb9f4297" Namespace="calico-system" Pod="whisker-7747c7fc7-whbjm" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-whisker--7747c7fc7--whbjm-eth0" Nov 8 00:26:18.015716 containerd[1730]: 2025-11-08 00:26:17.946 [INFO][4512] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4856ed55d09 ContainerID="aed888d869e22e99fa4bfac31daac1e7394182f72f9c5583dd91973bdb9f4297" Namespace="calico-system" Pod="whisker-7747c7fc7-whbjm" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-whisker--7747c7fc7--whbjm-eth0" Nov 8 00:26:18.015716 containerd[1730]: 2025-11-08 00:26:17.985 [INFO][4512] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aed888d869e22e99fa4bfac31daac1e7394182f72f9c5583dd91973bdb9f4297" Namespace="calico-system" Pod="whisker-7747c7fc7-whbjm" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-whisker--7747c7fc7--whbjm-eth0" Nov 8 00:26:18.015716 containerd[1730]: 2025-11-08 00:26:17.987 [INFO][4512] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="aed888d869e22e99fa4bfac31daac1e7394182f72f9c5583dd91973bdb9f4297"
Namespace="calico-system" Pod="whisker-7747c7fc7-whbjm" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-whisker--7747c7fc7--whbjm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2742f1d4ae-k8s-whisker--7747c7fc7--whbjm-eth0", GenerateName:"whisker-7747c7fc7-", Namespace:"calico-system", SelfLink:"", UID:"5b3e910f-ad36-41f9-8d09-74f4b684ee03", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 26, 17, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7747c7fc7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2742f1d4ae", ContainerID:"aed888d869e22e99fa4bfac31daac1e7394182f72f9c5583dd91973bdb9f4297", Pod:"whisker-7747c7fc7-whbjm", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.59.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4856ed55d09", MAC:"52:71:1e:9e:d1:9c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:18.015716 containerd[1730]: 2025-11-08 00:26:18.005 [INFO][4512] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="aed888d869e22e99fa4bfac31daac1e7394182f72f9c5583dd91973bdb9f4297" Namespace="calico-system" Pod="whisker-7747c7fc7-whbjm" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-whisker--7747c7fc7--whbjm-eth0" Nov 8 00:26:18.059442 containerd[1730]: time="2025-11-08T00:26:18.059267756Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:26:18.060162 containerd[1730]: time="2025-11-08T00:26:18.059357658Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:26:18.060162 containerd[1730]: time="2025-11-08T00:26:18.059392758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:18.060162 containerd[1730]: time="2025-11-08T00:26:18.059511560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:18.097936 systemd[1]: Started cri-containerd-aed888d869e22e99fa4bfac31daac1e7394182f72f9c5583dd91973bdb9f4297.scope - libcontainer container aed888d869e22e99fa4bfac31daac1e7394182f72f9c5583dd91973bdb9f4297.
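Note: the IPAM trace above is the happy path that was impossible at 00:26:07: the host already holds an affinity for the 192.168.59.64/26 block, so the plugin takes the host-wide lock, claims 192.168.59.65 from the block, writes the claim back, and hands the address to the new cali4856ed55d09 interface along with the generated MAC 52:71:1e:9e:d1:9c. A /26 affinity block gives each node 64 pod addresses; a small Go illustration of that block arithmetic using the standard net/netip package:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The block the IPAM log shows affined to ci-4081.3.6-n-2742f1d4ae.
	block := netip.MustParsePrefix("192.168.59.64/26")
	fmt.Println("addresses per block:", 1<<(32-block.Bits())) // 64

	// First few candidate addresses; the whisker pod received .65/32.
	for a, i := block.Addr(), 0; block.Contains(a) && i < 4; a, i = a.Next(), i+1 {
		fmt.Println(a)
	}
}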
Nov 8 00:26:18.181840 containerd[1730]: time="2025-11-08T00:26:18.181783916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7747c7fc7-whbjm,Uid:5b3e910f-ad36-41f9-8d09-74f4b684ee03,Namespace:calico-system,Attempt:0,} returns sandbox id \"aed888d869e22e99fa4bfac31daac1e7394182f72f9c5583dd91973bdb9f4297\"" Nov 8 00:26:18.186144 containerd[1730]: time="2025-11-08T00:26:18.186113375Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:26:18.392085 kernel: bpftool[4679]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 8 00:26:18.444573 containerd[1730]: time="2025-11-08T00:26:18.444378674Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:26:18.447602 containerd[1730]: time="2025-11-08T00:26:18.447445016Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:26:18.447602 containerd[1730]: time="2025-11-08T00:26:18.447547217Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:26:18.448206 kubelet[3231]: E1108 00:26:18.447994 3231 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:26:18.448206 kubelet[3231]: E1108 00:26:18.448068 3231 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:26:18.448608 kubelet[3231]: E1108 00:26:18.448397 3231 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-7747c7fc7-whbjm_calico-system(5b3e910f-ad36-41f9-8d09-74f4b684ee03): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:26:18.451447 containerd[1730]: time="2025-11-08T00:26:18.451206067Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:26:18.699357 containerd[1730]: time="2025-11-08T00:26:18.699233527Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:26:18.707236 containerd[1730]: time="2025-11-08T00:26:18.707189735Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:26:18.707405 containerd[1730]: time="2025-11-08T00:26:18.707288336Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes 
read=85" Nov 8 00:26:18.707531 kubelet[3231]: E1108 00:26:18.707482 3231 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:26:18.709046 kubelet[3231]: E1108 00:26:18.707541 3231 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:26:18.709046 kubelet[3231]: E1108 00:26:18.707661 3231 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-7747c7fc7-whbjm_calico-system(5b3e910f-ad36-41f9-8d09-74f4b684ee03): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:26:18.709046 kubelet[3231]: E1108 00:26:18.707734 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7747c7fc7-whbjm" podUID="5b3e910f-ad36-41f9-8d09-74f4b684ee03" Nov 8 00:26:18.766879 systemd-networkd[1352]: vxlan.calico: Link UP Nov 8 00:26:18.766895 systemd-networkd[1352]: vxlan.calico: Gained carrier Nov 8 00:26:19.132030 containerd[1730]: time="2025-11-08T00:26:19.131682686Z" level=info msg="StopPodSandbox for \"fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5\"" Nov 8 00:26:19.134264 kubelet[3231]: I1108 00:26:19.134227 3231 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee1131d4-80d0-4ed0-9ef4-8758722912cb" path="/var/lib/kubelet/pods/ee1131d4-80d0-4ed0-9ef4-8758722912cb/volumes" Nov 8 00:26:19.134751 containerd[1730]: time="2025-11-08T00:26:19.132687400Z" level=info msg="StopPodSandbox for \"2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44\"" Nov 8 00:26:19.278653 containerd[1730]: 2025-11-08 00:26:19.215 [INFO][4783] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5" Nov 8 00:26:19.278653 containerd[1730]: 2025-11-08 00:26:19.215 [INFO][4783] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5" iface="eth0" netns="/var/run/netns/cni-010bf027-3587-e383-3156-1b29bee0d3ac" Nov 8 00:26:19.278653 containerd[1730]: 2025-11-08 00:26:19.220 [INFO][4783] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5" iface="eth0" netns="/var/run/netns/cni-010bf027-3587-e383-3156-1b29bee0d3ac" Nov 8 00:26:19.278653 containerd[1730]: 2025-11-08 00:26:19.220 [INFO][4783] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5" iface="eth0" netns="/var/run/netns/cni-010bf027-3587-e383-3156-1b29bee0d3ac" Nov 8 00:26:19.278653 containerd[1730]: 2025-11-08 00:26:19.220 [INFO][4783] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5" Nov 8 00:26:19.278653 containerd[1730]: 2025-11-08 00:26:19.220 [INFO][4783] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5" Nov 8 00:26:19.278653 containerd[1730]: 2025-11-08 00:26:19.266 [INFO][4800] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5" HandleID="k8s-pod-network.fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-csi--node--driver--kbfws-eth0" Nov 8 00:26:19.278653 containerd[1730]: 2025-11-08 00:26:19.266 [INFO][4800] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:19.278653 containerd[1730]: 2025-11-08 00:26:19.266 [INFO][4800] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:19.278653 containerd[1730]: 2025-11-08 00:26:19.272 [WARNING][4800] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5" HandleID="k8s-pod-network.fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-csi--node--driver--kbfws-eth0" Nov 8 00:26:19.278653 containerd[1730]: 2025-11-08 00:26:19.272 [INFO][4800] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5" HandleID="k8s-pod-network.fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-csi--node--driver--kbfws-eth0" Nov 8 00:26:19.278653 containerd[1730]: 2025-11-08 00:26:19.273 [INFO][4800] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:19.278653 containerd[1730]: 2025-11-08 00:26:19.275 [INFO][4783] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5" Nov 8 00:26:19.283458 containerd[1730]: time="2025-11-08T00:26:19.280572103Z" level=info msg="TearDown network for sandbox \"fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5\" successfully" Nov 8 00:26:19.283458 containerd[1730]: time="2025-11-08T00:26:19.280783106Z" level=info msg="StopPodSandbox for \"fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5\" returns successfully" Nov 8 00:26:19.287235 systemd[1]: run-netns-cni\x2d010bf027\x2d3587\x2de383\x2d3156\x2d1b29bee0d3ac.mount: Deactivated successfully. 
Nov 8 00:26:19.288796 containerd[1730]: time="2025-11-08T00:26:19.288466310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kbfws,Uid:77eae253-1bce-4de5-8e9b-23a9c58b4ee0,Namespace:calico-system,Attempt:1,}" Nov 8 00:26:19.290802 containerd[1730]: 2025-11-08 00:26:19.218 [INFO][4784] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44" Nov 8 00:26:19.290802 containerd[1730]: 2025-11-08 00:26:19.219 [INFO][4784] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44" iface="eth0" netns="/var/run/netns/cni-e6f1c4db-5eef-678c-ed06-ca04b4a1326e" Nov 8 00:26:19.290802 containerd[1730]: 2025-11-08 00:26:19.221 [INFO][4784] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44" iface="eth0" netns="/var/run/netns/cni-e6f1c4db-5eef-678c-ed06-ca04b4a1326e" Nov 8 00:26:19.290802 containerd[1730]: 2025-11-08 00:26:19.225 [INFO][4784] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44" iface="eth0" netns="/var/run/netns/cni-e6f1c4db-5eef-678c-ed06-ca04b4a1326e" Nov 8 00:26:19.290802 containerd[1730]: 2025-11-08 00:26:19.225 [INFO][4784] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44" Nov 8 00:26:19.290802 containerd[1730]: 2025-11-08 00:26:19.225 [INFO][4784] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44" Nov 8 00:26:19.290802 containerd[1730]: 2025-11-08 00:26:19.266 [INFO][4802] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44" HandleID="k8s-pod-network.2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-coredns--66bc5c9577--llsdc-eth0" Nov 8 00:26:19.290802 containerd[1730]: 2025-11-08 00:26:19.266 [INFO][4802] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:19.290802 containerd[1730]: 2025-11-08 00:26:19.273 [INFO][4802] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:19.290802 containerd[1730]: 2025-11-08 00:26:19.283 [WARNING][4802] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44" HandleID="k8s-pod-network.2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-coredns--66bc5c9577--llsdc-eth0" Nov 8 00:26:19.290802 containerd[1730]: 2025-11-08 00:26:19.283 [INFO][4802] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44" HandleID="k8s-pod-network.2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-coredns--66bc5c9577--llsdc-eth0" Nov 8 00:26:19.290802 containerd[1730]: 2025-11-08 00:26:19.285 [INFO][4802] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:19.290802 containerd[1730]: 2025-11-08 00:26:19.288 [INFO][4784] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44" Nov 8 00:26:19.293429 containerd[1730]: time="2025-11-08T00:26:19.292815969Z" level=info msg="TearDown network for sandbox \"2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44\" successfully" Nov 8 00:26:19.293429 containerd[1730]: time="2025-11-08T00:26:19.292840769Z" level=info msg="StopPodSandbox for \"2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44\" returns successfully" Nov 8 00:26:19.295697 systemd[1]: run-netns-cni\x2de6f1c4db\x2d5eef\x2d678c\x2ded06\x2dca04b4a1326e.mount: Deactivated successfully. Nov 8 00:26:19.298338 containerd[1730]: time="2025-11-08T00:26:19.298290443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-llsdc,Uid:a589e2c8-4bc2-4178-8ecb-3723aaa6f7a2,Namespace:kube-system,Attempt:1,}" Nov 8 00:26:19.364093 kubelet[3231]: E1108 00:26:19.363937 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7747c7fc7-whbjm" podUID="5b3e910f-ad36-41f9-8d09-74f4b684ee03" Nov 8 00:26:19.516601 systemd-networkd[1352]: cali2c3645d39d4: Link UP Nov 8 00:26:19.518922 systemd-networkd[1352]: cali2c3645d39d4: Gained carrier Nov 8 00:26:19.536912 containerd[1730]: 2025-11-08 00:26:19.419 [INFO][4825] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--2742f1d4ae-k8s-coredns--66bc5c9577--llsdc-eth0 coredns-66bc5c9577- kube-system a589e2c8-4bc2-4178-8ecb-3723aaa6f7a2 920 0 2025-11-08 00:25:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-2742f1d4ae coredns-66bc5c9577-llsdc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2c3645d39d4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="fe5aff26b536cfb0b288da1c697b720a9a8e1d38bae01367ae07f36e13991ca1" Namespace="kube-system" Pod="coredns-66bc5c9577-llsdc" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-coredns--66bc5c9577--llsdc-" Nov 8 00:26:19.536912 containerd[1730]: 2025-11-08 00:26:19.419 [INFO][4825] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fe5aff26b536cfb0b288da1c697b720a9a8e1d38bae01367ae07f36e13991ca1" Namespace="kube-system" Pod="coredns-66bc5c9577-llsdc" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-coredns--66bc5c9577--llsdc-eth0" Nov 8 00:26:19.536912 containerd[1730]: 2025-11-08 00:26:19.464 [INFO][4839] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 
IPv6=0 ContainerID="fe5aff26b536cfb0b288da1c697b720a9a8e1d38bae01367ae07f36e13991ca1" HandleID="k8s-pod-network.fe5aff26b536cfb0b288da1c697b720a9a8e1d38bae01367ae07f36e13991ca1" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-coredns--66bc5c9577--llsdc-eth0" Nov 8 00:26:19.536912 containerd[1730]: 2025-11-08 00:26:19.465 [INFO][4839] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fe5aff26b536cfb0b288da1c697b720a9a8e1d38bae01367ae07f36e13991ca1" HandleID="k8s-pod-network.fe5aff26b536cfb0b288da1c697b720a9a8e1d38bae01367ae07f36e13991ca1" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-coredns--66bc5c9577--llsdc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5910), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-2742f1d4ae", "pod":"coredns-66bc5c9577-llsdc", "timestamp":"2025-11-08 00:26:19.464921001 +0000 UTC"}, Hostname:"ci-4081.3.6-n-2742f1d4ae", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:26:19.536912 containerd[1730]: 2025-11-08 00:26:19.465 [INFO][4839] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:19.536912 containerd[1730]: 2025-11-08 00:26:19.465 [INFO][4839] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:19.536912 containerd[1730]: 2025-11-08 00:26:19.465 [INFO][4839] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-2742f1d4ae' Nov 8 00:26:19.536912 containerd[1730]: 2025-11-08 00:26:19.472 [INFO][4839] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fe5aff26b536cfb0b288da1c697b720a9a8e1d38bae01367ae07f36e13991ca1" host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:19.536912 containerd[1730]: 2025-11-08 00:26:19.478 [INFO][4839] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:19.536912 containerd[1730]: 2025-11-08 00:26:19.481 [INFO][4839] ipam/ipam.go 511: Trying affinity for 192.168.59.64/26 host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:19.536912 containerd[1730]: 2025-11-08 00:26:19.483 [INFO][4839] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.64/26 host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:19.536912 containerd[1730]: 2025-11-08 00:26:19.484 [INFO][4839] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.64/26 host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:19.536912 containerd[1730]: 2025-11-08 00:26:19.484 [INFO][4839] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.59.64/26 handle="k8s-pod-network.fe5aff26b536cfb0b288da1c697b720a9a8e1d38bae01367ae07f36e13991ca1" host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:19.536912 containerd[1730]: 2025-11-08 00:26:19.489 [INFO][4839] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fe5aff26b536cfb0b288da1c697b720a9a8e1d38bae01367ae07f36e13991ca1 Nov 8 00:26:19.536912 containerd[1730]: 2025-11-08 00:26:19.493 [INFO][4839] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.59.64/26 handle="k8s-pod-network.fe5aff26b536cfb0b288da1c697b720a9a8e1d38bae01367ae07f36e13991ca1" host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:19.536912 containerd[1730]: 2025-11-08 00:26:19.500 [INFO][4839] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.59.66/26] block=192.168.59.64/26 handle="k8s-pod-network.fe5aff26b536cfb0b288da1c697b720a9a8e1d38bae01367ae07f36e13991ca1" host="ci-4081.3.6-n-2742f1d4ae" 
Nov 8 00:26:19.536912 containerd[1730]: 2025-11-08 00:26:19.501 [INFO][4839] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.66/26] handle="k8s-pod-network.fe5aff26b536cfb0b288da1c697b720a9a8e1d38bae01367ae07f36e13991ca1" host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:19.536912 containerd[1730]: 2025-11-08 00:26:19.501 [INFO][4839] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:19.536912 containerd[1730]: 2025-11-08 00:26:19.501 [INFO][4839] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.59.66/26] IPv6=[] ContainerID="fe5aff26b536cfb0b288da1c697b720a9a8e1d38bae01367ae07f36e13991ca1" HandleID="k8s-pod-network.fe5aff26b536cfb0b288da1c697b720a9a8e1d38bae01367ae07f36e13991ca1" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-coredns--66bc5c9577--llsdc-eth0" Nov 8 00:26:19.537921 containerd[1730]: 2025-11-08 00:26:19.503 [INFO][4825] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fe5aff26b536cfb0b288da1c697b720a9a8e1d38bae01367ae07f36e13991ca1" Namespace="kube-system" Pod="coredns-66bc5c9577-llsdc" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-coredns--66bc5c9577--llsdc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2742f1d4ae-k8s-coredns--66bc5c9577--llsdc-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"a589e2c8-4bc2-4178-8ecb-3723aaa6f7a2", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 25, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2742f1d4ae", ContainerID:"", Pod:"coredns-66bc5c9577-llsdc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2c3645d39d4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:19.537921 containerd[1730]: 2025-11-08 00:26:19.503 [INFO][4825] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.66/32] ContainerID="fe5aff26b536cfb0b288da1c697b720a9a8e1d38bae01367ae07f36e13991ca1" Namespace="kube-system" Pod="coredns-66bc5c9577-llsdc" 
WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-coredns--66bc5c9577--llsdc-eth0" Nov 8 00:26:19.537921 containerd[1730]: 2025-11-08 00:26:19.503 [INFO][4825] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2c3645d39d4 ContainerID="fe5aff26b536cfb0b288da1c697b720a9a8e1d38bae01367ae07f36e13991ca1" Namespace="kube-system" Pod="coredns-66bc5c9577-llsdc" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-coredns--66bc5c9577--llsdc-eth0" Nov 8 00:26:19.537921 containerd[1730]: 2025-11-08 00:26:19.521 [INFO][4825] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fe5aff26b536cfb0b288da1c697b720a9a8e1d38bae01367ae07f36e13991ca1" Namespace="kube-system" Pod="coredns-66bc5c9577-llsdc" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-coredns--66bc5c9577--llsdc-eth0" Nov 8 00:26:19.537921 containerd[1730]: 2025-11-08 00:26:19.522 [INFO][4825] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fe5aff26b536cfb0b288da1c697b720a9a8e1d38bae01367ae07f36e13991ca1" Namespace="kube-system" Pod="coredns-66bc5c9577-llsdc" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-coredns--66bc5c9577--llsdc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2742f1d4ae-k8s-coredns--66bc5c9577--llsdc-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"a589e2c8-4bc2-4178-8ecb-3723aaa6f7a2", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 25, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2742f1d4ae", ContainerID:"fe5aff26b536cfb0b288da1c697b720a9a8e1d38bae01367ae07f36e13991ca1", Pod:"coredns-66bc5c9577-llsdc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2c3645d39d4", MAC:"6a:e1:f9:aa:a6:38", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:19.538309 containerd[1730]: 2025-11-08 00:26:19.532 [INFO][4825] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="fe5aff26b536cfb0b288da1c697b720a9a8e1d38bae01367ae07f36e13991ca1" Namespace="kube-system" Pod="coredns-66bc5c9577-llsdc" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-coredns--66bc5c9577--llsdc-eth0" Nov 8 00:26:19.566483 containerd[1730]: time="2025-11-08T00:26:19.566377275Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:26:19.566483 containerd[1730]: time="2025-11-08T00:26:19.566442876Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:26:19.566848 containerd[1730]: time="2025-11-08T00:26:19.566463677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:19.566848 containerd[1730]: time="2025-11-08T00:26:19.566557378Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:19.591893 systemd[1]: Started cri-containerd-fe5aff26b536cfb0b288da1c697b720a9a8e1d38bae01367ae07f36e13991ca1.scope - libcontainer container fe5aff26b536cfb0b288da1c697b720a9a8e1d38bae01367ae07f36e13991ca1. Nov 8 00:26:19.628843 systemd-networkd[1352]: cali289bc357945: Link UP Nov 8 00:26:19.629078 systemd-networkd[1352]: cali289bc357945: Gained carrier Nov 8 00:26:19.655560 containerd[1730]: 2025-11-08 00:26:19.420 [INFO][4815] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--2742f1d4ae-k8s-csi--node--driver--kbfws-eth0 csi-node-driver- calico-system 77eae253-1bce-4de5-8e9b-23a9c58b4ee0 919 0 2025-11-08 00:25:52 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-n-2742f1d4ae csi-node-driver-kbfws eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali289bc357945 [] [] }} ContainerID="0b072f342977ba4ebf740a756b2f59bbe5b95280e0e65422629e686272c58401" Namespace="calico-system" Pod="csi-node-driver-kbfws" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-csi--node--driver--kbfws-" Nov 8 00:26:19.655560 containerd[1730]: 2025-11-08 00:26:19.420 [INFO][4815] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0b072f342977ba4ebf740a756b2f59bbe5b95280e0e65422629e686272c58401" Namespace="calico-system" Pod="csi-node-driver-kbfws" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-csi--node--driver--kbfws-eth0" Nov 8 00:26:19.655560 containerd[1730]: 2025-11-08 00:26:19.465 [INFO][4841] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0b072f342977ba4ebf740a756b2f59bbe5b95280e0e65422629e686272c58401" HandleID="k8s-pod-network.0b072f342977ba4ebf740a756b2f59bbe5b95280e0e65422629e686272c58401" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-csi--node--driver--kbfws-eth0" Nov 8 00:26:19.655560 containerd[1730]: 2025-11-08 00:26:19.466 [INFO][4841] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0b072f342977ba4ebf740a756b2f59bbe5b95280e0e65422629e686272c58401" HandleID="k8s-pod-network.0b072f342977ba4ebf740a756b2f59bbe5b95280e0e65422629e686272c58401" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-csi--node--driver--kbfws-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cefe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-2742f1d4ae", "pod":"csi-node-driver-kbfws", "timestamp":"2025-11-08 00:26:19.465928315 +0000 UTC"}, Hostname:"ci-4081.3.6-n-2742f1d4ae", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:26:19.655560 containerd[1730]: 2025-11-08 00:26:19.466 [INFO][4841] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:19.655560 containerd[1730]: 2025-11-08 00:26:19.501 [INFO][4841] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:19.655560 containerd[1730]: 2025-11-08 00:26:19.501 [INFO][4841] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-2742f1d4ae' Nov 8 00:26:19.655560 containerd[1730]: 2025-11-08 00:26:19.574 [INFO][4841] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0b072f342977ba4ebf740a756b2f59bbe5b95280e0e65422629e686272c58401" host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:19.655560 containerd[1730]: 2025-11-08 00:26:19.581 [INFO][4841] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:19.655560 containerd[1730]: 2025-11-08 00:26:19.586 [INFO][4841] ipam/ipam.go 511: Trying affinity for 192.168.59.64/26 host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:19.655560 containerd[1730]: 2025-11-08 00:26:19.589 [INFO][4841] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.64/26 host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:19.655560 containerd[1730]: 2025-11-08 00:26:19.594 [INFO][4841] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.64/26 host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:19.655560 containerd[1730]: 2025-11-08 00:26:19.594 [INFO][4841] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.59.64/26 handle="k8s-pod-network.0b072f342977ba4ebf740a756b2f59bbe5b95280e0e65422629e686272c58401" host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:19.655560 containerd[1730]: 2025-11-08 00:26:19.598 [INFO][4841] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0b072f342977ba4ebf740a756b2f59bbe5b95280e0e65422629e686272c58401 Nov 8 00:26:19.655560 containerd[1730]: 2025-11-08 00:26:19.605 [INFO][4841] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.59.64/26 handle="k8s-pod-network.0b072f342977ba4ebf740a756b2f59bbe5b95280e0e65422629e686272c58401" host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:19.655560 containerd[1730]: 2025-11-08 00:26:19.617 [INFO][4841] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.59.67/26] block=192.168.59.64/26 handle="k8s-pod-network.0b072f342977ba4ebf740a756b2f59bbe5b95280e0e65422629e686272c58401" host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:19.655560 containerd[1730]: 2025-11-08 00:26:19.617 [INFO][4841] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.67/26] handle="k8s-pod-network.0b072f342977ba4ebf740a756b2f59bbe5b95280e0e65422629e686272c58401" host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:19.655560 containerd[1730]: 2025-11-08 00:26:19.617 [INFO][4841] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
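Every allocation in this log brackets its work between "About to acquire host-wide IPAM lock" and "Released host-wide IPAM lock", which is why the concurrent CNI ADDs (whisker, coredns, csi-node-driver, goldmane) come out with consecutive addresses .65 through .68 instead of colliding. A minimal mutex sketch of that serialization pattern, assuming a toy in-memory allocator rather than Calico's datastore-backed one:

    package main

    import (
        "fmt"
        "sync"
    )

    type blockAllocator struct {
        mu   sync.Mutex
        next int // offset into 192.168.59.64/26
    }

    func (a *blockAllocator) assign() string {
        a.mu.Lock()         // "About to acquire host-wide IPAM lock"
        defer a.mu.Unlock() // "Released host-wide IPAM lock"
        a.next++
        return fmt.Sprintf("192.168.59.%d/32", 64+a.next)
    }

    func main() {
        var a blockAllocator
        var wg sync.WaitGroup
        for i := 0; i < 4; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                fmt.Println(a.assign()) // four distinct addresses, order nondeterministic
            }()
        }
        wg.Wait()
    }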
Nov 8 00:26:19.655560 containerd[1730]: 2025-11-08 00:26:19.617 [INFO][4841] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.59.67/26] IPv6=[] ContainerID="0b072f342977ba4ebf740a756b2f59bbe5b95280e0e65422629e686272c58401" HandleID="k8s-pod-network.0b072f342977ba4ebf740a756b2f59bbe5b95280e0e65422629e686272c58401" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-csi--node--driver--kbfws-eth0" Nov 8 00:26:19.656541 containerd[1730]: 2025-11-08 00:26:19.622 [INFO][4815] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0b072f342977ba4ebf740a756b2f59bbe5b95280e0e65422629e686272c58401" Namespace="calico-system" Pod="csi-node-driver-kbfws" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-csi--node--driver--kbfws-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2742f1d4ae-k8s-csi--node--driver--kbfws-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"77eae253-1bce-4de5-8e9b-23a9c58b4ee0", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 25, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2742f1d4ae", ContainerID:"", Pod:"csi-node-driver-kbfws", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.59.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali289bc357945", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:19.656541 containerd[1730]: 2025-11-08 00:26:19.622 [INFO][4815] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.67/32] ContainerID="0b072f342977ba4ebf740a756b2f59bbe5b95280e0e65422629e686272c58401" Namespace="calico-system" Pod="csi-node-driver-kbfws" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-csi--node--driver--kbfws-eth0" Nov 8 00:26:19.656541 containerd[1730]: 2025-11-08 00:26:19.623 [INFO][4815] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali289bc357945 ContainerID="0b072f342977ba4ebf740a756b2f59bbe5b95280e0e65422629e686272c58401" Namespace="calico-system" Pod="csi-node-driver-kbfws" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-csi--node--driver--kbfws-eth0" Nov 8 00:26:19.656541 containerd[1730]: 2025-11-08 00:26:19.626 [INFO][4815] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0b072f342977ba4ebf740a756b2f59bbe5b95280e0e65422629e686272c58401" Namespace="calico-system" Pod="csi-node-driver-kbfws" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-csi--node--driver--kbfws-eth0" Nov 8 00:26:19.656541 containerd[1730]: 2025-11-08 00:26:19.626 [INFO][4815] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="0b072f342977ba4ebf740a756b2f59bbe5b95280e0e65422629e686272c58401" Namespace="calico-system" Pod="csi-node-driver-kbfws" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-csi--node--driver--kbfws-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2742f1d4ae-k8s-csi--node--driver--kbfws-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"77eae253-1bce-4de5-8e9b-23a9c58b4ee0", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 25, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2742f1d4ae", ContainerID:"0b072f342977ba4ebf740a756b2f59bbe5b95280e0e65422629e686272c58401", Pod:"csi-node-driver-kbfws", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.59.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali289bc357945", MAC:"62:67:c1:04:49:67", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:19.656541 containerd[1730]: 2025-11-08 00:26:19.648 [INFO][4815] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0b072f342977ba4ebf740a756b2f59bbe5b95280e0e65422629e686272c58401" Namespace="calico-system" Pod="csi-node-driver-kbfws" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-csi--node--driver--kbfws-eth0" Nov 8 00:26:19.686508 containerd[1730]: time="2025-11-08T00:26:19.686460402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-llsdc,Uid:a589e2c8-4bc2-4178-8ecb-3723aaa6f7a2,Namespace:kube-system,Attempt:1,} returns sandbox id \"fe5aff26b536cfb0b288da1c697b720a9a8e1d38bae01367ae07f36e13991ca1\"" Nov 8 00:26:19.700023 containerd[1730]: time="2025-11-08T00:26:19.699977686Z" level=info msg="CreateContainer within sandbox \"fe5aff26b536cfb0b288da1c697b720a9a8e1d38bae01367ae07f36e13991ca1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:26:19.716101 containerd[1730]: time="2025-11-08T00:26:19.706205770Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:26:19.716101 containerd[1730]: time="2025-11-08T00:26:19.706361872Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:26:19.716101 containerd[1730]: time="2025-11-08T00:26:19.706382772Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:19.716101 containerd[1730]: time="2025-11-08T00:26:19.706609575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:19.738262 systemd[1]: Started cri-containerd-0b072f342977ba4ebf740a756b2f59bbe5b95280e0e65422629e686272c58401.scope - libcontainer container 0b072f342977ba4ebf740a756b2f59bbe5b95280e0e65422629e686272c58401. Nov 8 00:26:19.749754 containerd[1730]: time="2025-11-08T00:26:19.749665859Z" level=info msg="CreateContainer within sandbox \"fe5aff26b536cfb0b288da1c697b720a9a8e1d38bae01367ae07f36e13991ca1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9b8b1ad2c688af96cfdeef56df8414ac7fca8ef6e9815dad745ae0df3358f7c2\"" Nov 8 00:26:19.751809 containerd[1730]: time="2025-11-08T00:26:19.750768974Z" level=info msg="StartContainer for \"9b8b1ad2c688af96cfdeef56df8414ac7fca8ef6e9815dad745ae0df3358f7c2\"" Nov 8 00:26:19.793897 systemd[1]: Started cri-containerd-9b8b1ad2c688af96cfdeef56df8414ac7fca8ef6e9815dad745ae0df3358f7c2.scope - libcontainer container 9b8b1ad2c688af96cfdeef56df8414ac7fca8ef6e9815dad745ae0df3358f7c2. Nov 8 00:26:19.816075 containerd[1730]: time="2025-11-08T00:26:19.816031258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kbfws,Uid:77eae253-1bce-4de5-8e9b-23a9c58b4ee0,Namespace:calico-system,Attempt:1,} returns sandbox id \"0b072f342977ba4ebf740a756b2f59bbe5b95280e0e65422629e686272c58401\"" Nov 8 00:26:19.820965 containerd[1730]: time="2025-11-08T00:26:19.820830923Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:26:19.840450 containerd[1730]: time="2025-11-08T00:26:19.840341887Z" level=info msg="StartContainer for \"9b8b1ad2c688af96cfdeef56df8414ac7fca8ef6e9815dad745ae0df3358f7c2\" returns successfully" Nov 8 00:26:19.958077 systemd-networkd[1352]: cali4856ed55d09: Gained IPv6LL Nov 8 00:26:20.078010 containerd[1730]: time="2025-11-08T00:26:20.077611502Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:26:20.081177 containerd[1730]: time="2025-11-08T00:26:20.081121349Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:26:20.081303 containerd[1730]: time="2025-11-08T00:26:20.081230351Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:26:20.083468 kubelet[3231]: E1108 00:26:20.081906 3231 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:26:20.083468 kubelet[3231]: E1108 00:26:20.081966 3231 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:26:20.083468 kubelet[3231]: E1108 00:26:20.082077 3231 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-kbfws_calico-system(77eae253-1bce-4de5-8e9b-23a9c58b4ee0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:26:20.085050 containerd[1730]: time="2025-11-08T00:26:20.084914701Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:26:20.130583 containerd[1730]: time="2025-11-08T00:26:20.130521219Z" level=info msg="StopPodSandbox for \"53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591\"" Nov 8 00:26:20.227177 containerd[1730]: 2025-11-08 00:26:20.192 [INFO][5004] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591" Nov 8 00:26:20.227177 containerd[1730]: 2025-11-08 00:26:20.192 [INFO][5004] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591" iface="eth0" netns="/var/run/netns/cni-45d4fe48-bbfb-d330-bb52-78502ef6b45d" Nov 8 00:26:20.227177 containerd[1730]: 2025-11-08 00:26:20.193 [INFO][5004] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591" iface="eth0" netns="/var/run/netns/cni-45d4fe48-bbfb-d330-bb52-78502ef6b45d" Nov 8 00:26:20.227177 containerd[1730]: 2025-11-08 00:26:20.193 [INFO][5004] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591" iface="eth0" netns="/var/run/netns/cni-45d4fe48-bbfb-d330-bb52-78502ef6b45d" Nov 8 00:26:20.227177 containerd[1730]: 2025-11-08 00:26:20.193 [INFO][5004] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591" Nov 8 00:26:20.227177 containerd[1730]: 2025-11-08 00:26:20.193 [INFO][5004] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591" Nov 8 00:26:20.227177 containerd[1730]: 2025-11-08 00:26:20.215 [INFO][5011] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591" HandleID="k8s-pod-network.53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-goldmane--7c778bb748--sn8mq-eth0" Nov 8 00:26:20.227177 containerd[1730]: 2025-11-08 00:26:20.215 [INFO][5011] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:20.227177 containerd[1730]: 2025-11-08 00:26:20.215 [INFO][5011] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:20.227177 containerd[1730]: 2025-11-08 00:26:20.223 [WARNING][5011] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591" HandleID="k8s-pod-network.53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-goldmane--7c778bb748--sn8mq-eth0" Nov 8 00:26:20.227177 containerd[1730]: 2025-11-08 00:26:20.223 [INFO][5011] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591" HandleID="k8s-pod-network.53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-goldmane--7c778bb748--sn8mq-eth0" Nov 8 00:26:20.227177 containerd[1730]: 2025-11-08 00:26:20.224 [INFO][5011] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:20.227177 containerd[1730]: 2025-11-08 00:26:20.225 [INFO][5004] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591" Nov 8 00:26:20.227991 containerd[1730]: time="2025-11-08T00:26:20.227952139Z" level=info msg="TearDown network for sandbox \"53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591\" successfully" Nov 8 00:26:20.227991 containerd[1730]: time="2025-11-08T00:26:20.227988339Z" level=info msg="StopPodSandbox for \"53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591\" returns successfully" Nov 8 00:26:20.233647 containerd[1730]: time="2025-11-08T00:26:20.233601015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-sn8mq,Uid:fc5e470f-14b3-444c-ac8a-3efb084d3809,Namespace:calico-system,Attempt:1,}" Nov 8 00:26:20.332900 containerd[1730]: time="2025-11-08T00:26:20.331741345Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:26:20.337248 containerd[1730]: time="2025-11-08T00:26:20.337195919Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:26:20.337533 containerd[1730]: time="2025-11-08T00:26:20.337459823Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:26:20.338341 kubelet[3231]: E1108 00:26:20.337743 3231 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:26:20.338341 kubelet[3231]: E1108 00:26:20.337796 3231 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:26:20.338341 kubelet[3231]: E1108 00:26:20.337897 3231 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod 
csi-node-driver-kbfws_calico-system(77eae253-1bce-4de5-8e9b-23a9c58b4ee0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:26:20.338551 kubelet[3231]: E1108 00:26:20.337956 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kbfws" podUID="77eae253-1bce-4de5-8e9b-23a9c58b4ee0" Nov 8 00:26:20.375139 kubelet[3231]: E1108 00:26:20.375065 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kbfws" podUID="77eae253-1bce-4de5-8e9b-23a9c58b4ee0" Nov 8 00:26:20.380444 systemd-networkd[1352]: calid010458482a: Link UP Nov 8 00:26:20.381749 systemd-networkd[1352]: calid010458482a: Gained carrier Nov 8 00:26:20.407144 kubelet[3231]: I1108 00:26:20.404695 3231 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-llsdc" podStartSLOduration=43.404674233 podStartE2EDuration="43.404674233s" podCreationTimestamp="2025-11-08 00:25:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:26:20.401586091 +0000 UTC m=+49.986126291" watchObservedRunningTime="2025-11-08 00:26:20.404674233 +0000 UTC m=+49.989214333" Nov 8 00:26:20.410550 containerd[1730]: 2025-11-08 00:26:20.304 [INFO][5019] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--2742f1d4ae-k8s-goldmane--7c778bb748--sn8mq-eth0 goldmane-7c778bb748- calico-system fc5e470f-14b3-444c-ac8a-3efb084d3809 946 0 2025-11-08 00:25:50 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-n-2742f1d4ae goldmane-7c778bb748-sn8mq eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calid010458482a [] [] }} ContainerID="d3c7fc4342dfa79a90590e5c569b4fa35880585384f081b87ca8768520a0c7c7" Namespace="calico-system" Pod="goldmane-7c778bb748-sn8mq" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-goldmane--7c778bb748--sn8mq-" Nov 8 00:26:20.410550 containerd[1730]: 2025-11-08 00:26:20.304 [INFO][5019] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d3c7fc4342dfa79a90590e5c569b4fa35880585384f081b87ca8768520a0c7c7" Namespace="calico-system" Pod="goldmane-7c778bb748-sn8mq" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-goldmane--7c778bb748--sn8mq-eth0" Nov 8 00:26:20.410550 containerd[1730]: 2025-11-08 00:26:20.329 [INFO][5031] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d3c7fc4342dfa79a90590e5c569b4fa35880585384f081b87ca8768520a0c7c7" HandleID="k8s-pod-network.d3c7fc4342dfa79a90590e5c569b4fa35880585384f081b87ca8768520a0c7c7" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-goldmane--7c778bb748--sn8mq-eth0" Nov 8 00:26:20.410550 containerd[1730]: 2025-11-08 00:26:20.329 [INFO][5031] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d3c7fc4342dfa79a90590e5c569b4fa35880585384f081b87ca8768520a0c7c7" HandleID="k8s-pod-network.d3c7fc4342dfa79a90590e5c569b4fa35880585384f081b87ca8768520a0c7c7" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-goldmane--7c778bb748--sn8mq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f5d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-2742f1d4ae", "pod":"goldmane-7c778bb748-sn8mq", "timestamp":"2025-11-08 00:26:20.329689817 +0000 UTC"}, Hostname:"ci-4081.3.6-n-2742f1d4ae", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:26:20.410550 containerd[1730]: 2025-11-08 00:26:20.329 [INFO][5031] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:20.410550 containerd[1730]: 2025-11-08 00:26:20.329 [INFO][5031] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:26:20.410550 containerd[1730]: 2025-11-08 00:26:20.329 [INFO][5031] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-2742f1d4ae' Nov 8 00:26:20.410550 containerd[1730]: 2025-11-08 00:26:20.337 [INFO][5031] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d3c7fc4342dfa79a90590e5c569b4fa35880585384f081b87ca8768520a0c7c7" host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:20.410550 containerd[1730]: 2025-11-08 00:26:20.343 [INFO][5031] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:20.410550 containerd[1730]: 2025-11-08 00:26:20.350 [INFO][5031] ipam/ipam.go 511: Trying affinity for 192.168.59.64/26 host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:20.410550 containerd[1730]: 2025-11-08 00:26:20.351 [INFO][5031] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.64/26 host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:20.410550 containerd[1730]: 2025-11-08 00:26:20.353 [INFO][5031] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.64/26 host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:20.410550 containerd[1730]: 2025-11-08 00:26:20.353 [INFO][5031] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.59.64/26 handle="k8s-pod-network.d3c7fc4342dfa79a90590e5c569b4fa35880585384f081b87ca8768520a0c7c7" host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:20.410550 containerd[1730]: 2025-11-08 00:26:20.354 [INFO][5031] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d3c7fc4342dfa79a90590e5c569b4fa35880585384f081b87ca8768520a0c7c7 Nov 8 00:26:20.410550 containerd[1730]: 2025-11-08 00:26:20.358 [INFO][5031] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.59.64/26 handle="k8s-pod-network.d3c7fc4342dfa79a90590e5c569b4fa35880585384f081b87ca8768520a0c7c7" host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:20.410550 containerd[1730]: 2025-11-08 00:26:20.367 [INFO][5031] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.59.68/26] block=192.168.59.64/26 handle="k8s-pod-network.d3c7fc4342dfa79a90590e5c569b4fa35880585384f081b87ca8768520a0c7c7" host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:20.410550 containerd[1730]: 2025-11-08 00:26:20.367 [INFO][5031] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.68/26] handle="k8s-pod-network.d3c7fc4342dfa79a90590e5c569b4fa35880585384f081b87ca8768520a0c7c7" host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:20.410550 containerd[1730]: 2025-11-08 00:26:20.367 [INFO][5031] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
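Each failed pull in this section resolves the same way: containerd splits the reference into registry host, repository, and tag, asks ghcr.io for the manifest, and gets http.StatusNotFound back ("trying next host"). The splitter below is a hypothetical simplification for illustration only; it ignores ports, digests, and implicit docker.io defaults that a real resolver such as containerd's reference package handles:

    package main

    import (
        "fmt"
        "strings"
    )

    // splitRef naively splits "host/repo:tag"; hypothetical helper, not containerd's resolver.
    func splitRef(ref string) (host, repo, tag string) {
        var rest string
        host, rest, _ = strings.Cut(ref, "/")
        repo, tag, _ = strings.Cut(rest, ":")
        if tag == "" {
            tag = "latest"
        }
        return
    }

    func main() {
        fmt.Println(splitRef("ghcr.io/flatcar/calico/goldmane:v3.30.4"))
        // ghcr.io flatcar/calico/goldmane v3.30.4
    }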
Nov 8 00:26:20.410550 containerd[1730]: 2025-11-08 00:26:20.367 [INFO][5031] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.59.68/26] IPv6=[] ContainerID="d3c7fc4342dfa79a90590e5c569b4fa35880585384f081b87ca8768520a0c7c7" HandleID="k8s-pod-network.d3c7fc4342dfa79a90590e5c569b4fa35880585384f081b87ca8768520a0c7c7" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-goldmane--7c778bb748--sn8mq-eth0" Nov 8 00:26:20.411654 containerd[1730]: 2025-11-08 00:26:20.373 [INFO][5019] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d3c7fc4342dfa79a90590e5c569b4fa35880585384f081b87ca8768520a0c7c7" Namespace="calico-system" Pod="goldmane-7c778bb748-sn8mq" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-goldmane--7c778bb748--sn8mq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2742f1d4ae-k8s-goldmane--7c778bb748--sn8mq-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"fc5e470f-14b3-444c-ac8a-3efb084d3809", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 25, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2742f1d4ae", ContainerID:"", Pod:"goldmane-7c778bb748-sn8mq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.59.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid010458482a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:20.411654 containerd[1730]: 2025-11-08 00:26:20.375 [INFO][5019] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.68/32] ContainerID="d3c7fc4342dfa79a90590e5c569b4fa35880585384f081b87ca8768520a0c7c7" Namespace="calico-system" Pod="goldmane-7c778bb748-sn8mq" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-goldmane--7c778bb748--sn8mq-eth0" Nov 8 00:26:20.411654 containerd[1730]: 2025-11-08 00:26:20.375 [INFO][5019] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid010458482a ContainerID="d3c7fc4342dfa79a90590e5c569b4fa35880585384f081b87ca8768520a0c7c7" Namespace="calico-system" Pod="goldmane-7c778bb748-sn8mq" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-goldmane--7c778bb748--sn8mq-eth0" Nov 8 00:26:20.411654 containerd[1730]: 2025-11-08 00:26:20.381 [INFO][5019] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d3c7fc4342dfa79a90590e5c569b4fa35880585384f081b87ca8768520a0c7c7" Namespace="calico-system" Pod="goldmane-7c778bb748-sn8mq" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-goldmane--7c778bb748--sn8mq-eth0" Nov 8 00:26:20.411654 containerd[1730]: 2025-11-08 00:26:20.382 [INFO][5019] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d3c7fc4342dfa79a90590e5c569b4fa35880585384f081b87ca8768520a0c7c7" 
Namespace="calico-system" Pod="goldmane-7c778bb748-sn8mq" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-goldmane--7c778bb748--sn8mq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2742f1d4ae-k8s-goldmane--7c778bb748--sn8mq-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"fc5e470f-14b3-444c-ac8a-3efb084d3809", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 25, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2742f1d4ae", ContainerID:"d3c7fc4342dfa79a90590e5c569b4fa35880585384f081b87ca8768520a0c7c7", Pod:"goldmane-7c778bb748-sn8mq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.59.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid010458482a", MAC:"da:36:d2:e8:47:fd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:20.411654 containerd[1730]: 2025-11-08 00:26:20.405 [INFO][5019] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d3c7fc4342dfa79a90590e5c569b4fa35880585384f081b87ca8768520a0c7c7" Namespace="calico-system" Pod="goldmane-7c778bb748-sn8mq" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-goldmane--7c778bb748--sn8mq-eth0" Nov 8 00:26:20.453880 containerd[1730]: time="2025-11-08T00:26:20.453330492Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:26:20.453880 containerd[1730]: time="2025-11-08T00:26:20.453413094Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:26:20.453880 containerd[1730]: time="2025-11-08T00:26:20.453430594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:20.453880 containerd[1730]: time="2025-11-08T00:26:20.453513895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:20.487083 systemd[1]: Started cri-containerd-d3c7fc4342dfa79a90590e5c569b4fa35880585384f081b87ca8768520a0c7c7.scope - libcontainer container d3c7fc4342dfa79a90590e5c569b4fa35880585384f081b87ca8768520a0c7c7. 
Nov 8 00:26:20.533652 containerd[1730]: time="2025-11-08T00:26:20.533540779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-sn8mq,Uid:fc5e470f-14b3-444c-ac8a-3efb084d3809,Namespace:calico-system,Attempt:1,} returns sandbox id \"d3c7fc4342dfa79a90590e5c569b4fa35880585384f081b87ca8768520a0c7c7\"" Nov 8 00:26:20.534262 systemd-networkd[1352]: vxlan.calico: Gained IPv6LL Nov 8 00:26:20.536852 containerd[1730]: time="2025-11-08T00:26:20.536511419Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:26:20.676223 systemd[1]: run-netns-cni\x2d45d4fe48\x2dbbfb\x2dd330\x2dbb52\x2d78502ef6b45d.mount: Deactivated successfully. Nov 8 00:26:20.789751 containerd[1730]: time="2025-11-08T00:26:20.789679849Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:26:20.792995 containerd[1730]: time="2025-11-08T00:26:20.792903493Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:26:20.792995 containerd[1730]: time="2025-11-08T00:26:20.792943394Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:26:20.793214 kubelet[3231]: E1108 00:26:20.793166 3231 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:26:20.793304 kubelet[3231]: E1108 00:26:20.793222 3231 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:26:20.793763 kubelet[3231]: E1108 00:26:20.793335 3231 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-sn8mq_calico-system(fc5e470f-14b3-444c-ac8a-3efb084d3809): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:26:20.793763 kubelet[3231]: E1108 00:26:20.793376 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-sn8mq" podUID="fc5e470f-14b3-444c-ac8a-3efb084d3809" Nov 8 00:26:21.046002 systemd-networkd[1352]: cali289bc357945: Gained IPv6LL Nov 8 00:26:21.130999 containerd[1730]: time="2025-11-08T00:26:21.130464467Z" level=info msg="StopPodSandbox for \"a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762\"" Nov 8 00:26:21.131340 
containerd[1730]: time="2025-11-08T00:26:21.130464767Z" level=info msg="StopPodSandbox for \"1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd\"" Nov 8 00:26:21.247711 containerd[1730]: 2025-11-08 00:26:21.195 [INFO][5109] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762" Nov 8 00:26:21.247711 containerd[1730]: 2025-11-08 00:26:21.196 [INFO][5109] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762" iface="eth0" netns="/var/run/netns/cni-eadc480e-b34a-0b5f-8073-96ed314d99be" Nov 8 00:26:21.247711 containerd[1730]: 2025-11-08 00:26:21.196 [INFO][5109] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762" iface="eth0" netns="/var/run/netns/cni-eadc480e-b34a-0b5f-8073-96ed314d99be" Nov 8 00:26:21.247711 containerd[1730]: 2025-11-08 00:26:21.197 [INFO][5109] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762" iface="eth0" netns="/var/run/netns/cni-eadc480e-b34a-0b5f-8073-96ed314d99be" Nov 8 00:26:21.247711 containerd[1730]: 2025-11-08 00:26:21.197 [INFO][5109] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762" Nov 8 00:26:21.247711 containerd[1730]: 2025-11-08 00:26:21.197 [INFO][5109] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762" Nov 8 00:26:21.247711 containerd[1730]: 2025-11-08 00:26:21.234 [INFO][5124] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762" HandleID="k8s-pod-network.a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-calico--kube--controllers--598d7bd9d8--kgsh2-eth0" Nov 8 00:26:21.247711 containerd[1730]: 2025-11-08 00:26:21.234 [INFO][5124] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:21.247711 containerd[1730]: 2025-11-08 00:26:21.235 [INFO][5124] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:21.247711 containerd[1730]: 2025-11-08 00:26:21.243 [WARNING][5124] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762" HandleID="k8s-pod-network.a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-calico--kube--controllers--598d7bd9d8--kgsh2-eth0" Nov 8 00:26:21.247711 containerd[1730]: 2025-11-08 00:26:21.243 [INFO][5124] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762" HandleID="k8s-pod-network.a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-calico--kube--controllers--598d7bd9d8--kgsh2-eth0" Nov 8 00:26:21.247711 containerd[1730]: 2025-11-08 00:26:21.244 [INFO][5124] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:21.247711 containerd[1730]: 2025-11-08 00:26:21.246 [INFO][5109] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762" Nov 8 00:26:21.251382 containerd[1730]: time="2025-11-08T00:26:21.250791297Z" level=info msg="TearDown network for sandbox \"a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762\" successfully" Nov 8 00:26:21.251382 containerd[1730]: time="2025-11-08T00:26:21.250831997Z" level=info msg="StopPodSandbox for \"a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762\" returns successfully" Nov 8 00:26:21.255583 systemd[1]: run-netns-cni\x2deadc480e\x2db34a\x2d0b5f\x2d8073\x2d96ed314d99be.mount: Deactivated successfully. Nov 8 00:26:21.258704 containerd[1730]: time="2025-11-08T00:26:21.258519102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-598d7bd9d8-kgsh2,Uid:928ddcfe-b055-4feb-bfb6-23dedc6fa744,Namespace:calico-system,Attempt:1,}" Nov 8 00:26:21.261665 containerd[1730]: 2025-11-08 00:26:21.201 [INFO][5110] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd" Nov 8 00:26:21.261665 containerd[1730]: 2025-11-08 00:26:21.202 [INFO][5110] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd" iface="eth0" netns="/var/run/netns/cni-09c15d18-336b-de65-1232-082418bc1189" Nov 8 00:26:21.261665 containerd[1730]: 2025-11-08 00:26:21.203 [INFO][5110] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd" iface="eth0" netns="/var/run/netns/cni-09c15d18-336b-de65-1232-082418bc1189" Nov 8 00:26:21.261665 containerd[1730]: 2025-11-08 00:26:21.203 [INFO][5110] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd" iface="eth0" netns="/var/run/netns/cni-09c15d18-336b-de65-1232-082418bc1189" Nov 8 00:26:21.261665 containerd[1730]: 2025-11-08 00:26:21.203 [INFO][5110] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd" Nov 8 00:26:21.261665 containerd[1730]: 2025-11-08 00:26:21.203 [INFO][5110] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd" Nov 8 00:26:21.261665 containerd[1730]: 2025-11-08 00:26:21.242 [INFO][5126] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd" HandleID="k8s-pod-network.1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-calico--apiserver--74448999c6--6grzk-eth0" Nov 8 00:26:21.261665 containerd[1730]: 2025-11-08 00:26:21.242 [INFO][5126] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:21.261665 containerd[1730]: 2025-11-08 00:26:21.244 [INFO][5126] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:21.261665 containerd[1730]: 2025-11-08 00:26:21.255 [WARNING][5126] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd" HandleID="k8s-pod-network.1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-calico--apiserver--74448999c6--6grzk-eth0" Nov 8 00:26:21.261665 containerd[1730]: 2025-11-08 00:26:21.255 [INFO][5126] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd" HandleID="k8s-pod-network.1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-calico--apiserver--74448999c6--6grzk-eth0" Nov 8 00:26:21.261665 containerd[1730]: 2025-11-08 00:26:21.258 [INFO][5126] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:21.261665 containerd[1730]: 2025-11-08 00:26:21.260 [INFO][5110] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd" Nov 8 00:26:21.263983 containerd[1730]: time="2025-11-08T00:26:21.262626557Z" level=info msg="TearDown network for sandbox \"1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd\" successfully" Nov 8 00:26:21.263983 containerd[1730]: time="2025-11-08T00:26:21.262660858Z" level=info msg="StopPodSandbox for \"1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd\" returns successfully" Nov 8 00:26:21.267190 systemd[1]: run-netns-cni\x2d09c15d18\x2d336b\x2dde65\x2d1232\x2d082418bc1189.mount: Deactivated successfully. Nov 8 00:26:21.274414 containerd[1730]: time="2025-11-08T00:26:21.274380616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74448999c6-6grzk,Uid:0ce899e0-d12a-4abb-b40d-26c4cc149868,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:26:21.385904 kubelet[3231]: E1108 00:26:21.385774 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-sn8mq" podUID="fc5e470f-14b3-444c-ac8a-3efb084d3809" Nov 8 00:26:21.400742 kubelet[3231]: E1108 00:26:21.398416 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kbfws" podUID="77eae253-1bce-4de5-8e9b-23a9c58b4ee0" Nov 8 00:26:21.507914 systemd-networkd[1352]: cali415ca59f634: Link UP Nov 8 
00:26:21.509058 systemd-networkd[1352]: cali415ca59f634: Gained carrier Nov 8 00:26:21.529391 containerd[1730]: 2025-11-08 00:26:21.374 [INFO][5148] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--2742f1d4ae-k8s-calico--apiserver--74448999c6--6grzk-eth0 calico-apiserver-74448999c6- calico-apiserver 0ce899e0-d12a-4abb-b40d-26c4cc149868 973 0 2025-11-08 00:25:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:74448999c6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-2742f1d4ae calico-apiserver-74448999c6-6grzk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali415ca59f634 [] [] }} ContainerID="1a5639493f6f315aacf6a09e99d57671e89667730eb47f3f4db44ce089a6b9da" Namespace="calico-apiserver" Pod="calico-apiserver-74448999c6-6grzk" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-calico--apiserver--74448999c6--6grzk-" Nov 8 00:26:21.529391 containerd[1730]: 2025-11-08 00:26:21.374 [INFO][5148] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1a5639493f6f315aacf6a09e99d57671e89667730eb47f3f4db44ce089a6b9da" Namespace="calico-apiserver" Pod="calico-apiserver-74448999c6-6grzk" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-calico--apiserver--74448999c6--6grzk-eth0" Nov 8 00:26:21.529391 containerd[1730]: 2025-11-08 00:26:21.450 [INFO][5167] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1a5639493f6f315aacf6a09e99d57671e89667730eb47f3f4db44ce089a6b9da" HandleID="k8s-pod-network.1a5639493f6f315aacf6a09e99d57671e89667730eb47f3f4db44ce089a6b9da" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-calico--apiserver--74448999c6--6grzk-eth0" Nov 8 00:26:21.529391 containerd[1730]: 2025-11-08 00:26:21.452 [INFO][5167] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1a5639493f6f315aacf6a09e99d57671e89667730eb47f3f4db44ce089a6b9da" HandleID="k8s-pod-network.1a5639493f6f315aacf6a09e99d57671e89667730eb47f3f4db44ce089a6b9da" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-calico--apiserver--74448999c6--6grzk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003338a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-2742f1d4ae", "pod":"calico-apiserver-74448999c6-6grzk", "timestamp":"2025-11-08 00:26:21.450133098 +0000 UTC"}, Hostname:"ci-4081.3.6-n-2742f1d4ae", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:26:21.529391 containerd[1730]: 2025-11-08 00:26:21.452 [INFO][5167] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:21.529391 containerd[1730]: 2025-11-08 00:26:21.452 [INFO][5167] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:26:21.529391 containerd[1730]: 2025-11-08 00:26:21.452 [INFO][5167] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-2742f1d4ae' Nov 8 00:26:21.529391 containerd[1730]: 2025-11-08 00:26:21.459 [INFO][5167] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1a5639493f6f315aacf6a09e99d57671e89667730eb47f3f4db44ce089a6b9da" host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:21.529391 containerd[1730]: 2025-11-08 00:26:21.463 [INFO][5167] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:21.529391 containerd[1730]: 2025-11-08 00:26:21.466 [INFO][5167] ipam/ipam.go 511: Trying affinity for 192.168.59.64/26 host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:21.529391 containerd[1730]: 2025-11-08 00:26:21.470 [INFO][5167] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.64/26 host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:21.529391 containerd[1730]: 2025-11-08 00:26:21.472 [INFO][5167] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.64/26 host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:21.529391 containerd[1730]: 2025-11-08 00:26:21.472 [INFO][5167] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.59.64/26 handle="k8s-pod-network.1a5639493f6f315aacf6a09e99d57671e89667730eb47f3f4db44ce089a6b9da" host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:21.529391 containerd[1730]: 2025-11-08 00:26:21.474 [INFO][5167] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1a5639493f6f315aacf6a09e99d57671e89667730eb47f3f4db44ce089a6b9da Nov 8 00:26:21.529391 containerd[1730]: 2025-11-08 00:26:21.482 [INFO][5167] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.59.64/26 handle="k8s-pod-network.1a5639493f6f315aacf6a09e99d57671e89667730eb47f3f4db44ce089a6b9da" host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:21.529391 containerd[1730]: 2025-11-08 00:26:21.493 [INFO][5167] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.59.69/26] block=192.168.59.64/26 handle="k8s-pod-network.1a5639493f6f315aacf6a09e99d57671e89667730eb47f3f4db44ce089a6b9da" host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:21.529391 containerd[1730]: 2025-11-08 00:26:21.493 [INFO][5167] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.69/26] handle="k8s-pod-network.1a5639493f6f315aacf6a09e99d57671e89667730eb47f3f4db44ce089a6b9da" host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:21.529391 containerd[1730]: 2025-11-08 00:26:21.493 [INFO][5167] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:26:21.529391 containerd[1730]: 2025-11-08 00:26:21.493 [INFO][5167] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.59.69/26] IPv6=[] ContainerID="1a5639493f6f315aacf6a09e99d57671e89667730eb47f3f4db44ce089a6b9da" HandleID="k8s-pod-network.1a5639493f6f315aacf6a09e99d57671e89667730eb47f3f4db44ce089a6b9da" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-calico--apiserver--74448999c6--6grzk-eth0" Nov 8 00:26:21.530342 containerd[1730]: 2025-11-08 00:26:21.496 [INFO][5148] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1a5639493f6f315aacf6a09e99d57671e89667730eb47f3f4db44ce089a6b9da" Namespace="calico-apiserver" Pod="calico-apiserver-74448999c6-6grzk" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-calico--apiserver--74448999c6--6grzk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2742f1d4ae-k8s-calico--apiserver--74448999c6--6grzk-eth0", GenerateName:"calico-apiserver-74448999c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"0ce899e0-d12a-4abb-b40d-26c4cc149868", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 25, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74448999c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2742f1d4ae", ContainerID:"", Pod:"calico-apiserver-74448999c6-6grzk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali415ca59f634", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:21.530342 containerd[1730]: 2025-11-08 00:26:21.497 [INFO][5148] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.69/32] ContainerID="1a5639493f6f315aacf6a09e99d57671e89667730eb47f3f4db44ce089a6b9da" Namespace="calico-apiserver" Pod="calico-apiserver-74448999c6-6grzk" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-calico--apiserver--74448999c6--6grzk-eth0" Nov 8 00:26:21.530342 containerd[1730]: 2025-11-08 00:26:21.497 [INFO][5148] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali415ca59f634 ContainerID="1a5639493f6f315aacf6a09e99d57671e89667730eb47f3f4db44ce089a6b9da" Namespace="calico-apiserver" Pod="calico-apiserver-74448999c6-6grzk" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-calico--apiserver--74448999c6--6grzk-eth0" Nov 8 00:26:21.530342 containerd[1730]: 2025-11-08 00:26:21.509 [INFO][5148] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1a5639493f6f315aacf6a09e99d57671e89667730eb47f3f4db44ce089a6b9da" Namespace="calico-apiserver" Pod="calico-apiserver-74448999c6-6grzk" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-calico--apiserver--74448999c6--6grzk-eth0" Nov 8 00:26:21.530342 containerd[1730]: 2025-11-08 00:26:21.510 [INFO][5148] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1a5639493f6f315aacf6a09e99d57671e89667730eb47f3f4db44ce089a6b9da" Namespace="calico-apiserver" Pod="calico-apiserver-74448999c6-6grzk" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-calico--apiserver--74448999c6--6grzk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2742f1d4ae-k8s-calico--apiserver--74448999c6--6grzk-eth0", GenerateName:"calico-apiserver-74448999c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"0ce899e0-d12a-4abb-b40d-26c4cc149868", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 25, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74448999c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2742f1d4ae", ContainerID:"1a5639493f6f315aacf6a09e99d57671e89667730eb47f3f4db44ce089a6b9da", Pod:"calico-apiserver-74448999c6-6grzk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali415ca59f634", MAC:"d6:d3:4e:90:63:99", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:21.530342 containerd[1730]: 2025-11-08 00:26:21.527 [INFO][5148] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1a5639493f6f315aacf6a09e99d57671e89667730eb47f3f4db44ce089a6b9da" Namespace="calico-apiserver" Pod="calico-apiserver-74448999c6-6grzk" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-calico--apiserver--74448999c6--6grzk-eth0" Nov 8 00:26:21.558118 systemd-networkd[1352]: cali2c3645d39d4: Gained IPv6LL Nov 8 00:26:21.564083 containerd[1730]: time="2025-11-08T00:26:21.563967540Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:26:21.564629 containerd[1730]: time="2025-11-08T00:26:21.564534148Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:26:21.565343 containerd[1730]: time="2025-11-08T00:26:21.564710050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:21.565434 containerd[1730]: time="2025-11-08T00:26:21.565318658Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:21.592947 systemd[1]: Started cri-containerd-1a5639493f6f315aacf6a09e99d57671e89667730eb47f3f4db44ce089a6b9da.scope - libcontainer container 1a5639493f6f315aacf6a09e99d57671e89667730eb47f3f4db44ce089a6b9da. 
Nov 8 00:26:21.613518 systemd-networkd[1352]: calid46d428498a: Link UP Nov 8 00:26:21.615615 systemd-networkd[1352]: calid46d428498a: Gained carrier Nov 8 00:26:21.633606 containerd[1730]: 2025-11-08 00:26:21.368 [INFO][5138] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--2742f1d4ae-k8s-calico--kube--controllers--598d7bd9d8--kgsh2-eth0 calico-kube-controllers-598d7bd9d8- calico-system 928ddcfe-b055-4feb-bfb6-23dedc6fa744 972 0 2025-11-08 00:25:52 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:598d7bd9d8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-n-2742f1d4ae calico-kube-controllers-598d7bd9d8-kgsh2 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid46d428498a [] [] }} ContainerID="10f7f10bab10eee182b2f0dedc01160e729a39e89aa6f92eb6ce960f534ce0ac" Namespace="calico-system" Pod="calico-kube-controllers-598d7bd9d8-kgsh2" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-calico--kube--controllers--598d7bd9d8--kgsh2-" Nov 8 00:26:21.633606 containerd[1730]: 2025-11-08 00:26:21.368 [INFO][5138] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="10f7f10bab10eee182b2f0dedc01160e729a39e89aa6f92eb6ce960f534ce0ac" Namespace="calico-system" Pod="calico-kube-controllers-598d7bd9d8-kgsh2" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-calico--kube--controllers--598d7bd9d8--kgsh2-eth0" Nov 8 00:26:21.633606 containerd[1730]: 2025-11-08 00:26:21.451 [INFO][5162] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="10f7f10bab10eee182b2f0dedc01160e729a39e89aa6f92eb6ce960f534ce0ac" HandleID="k8s-pod-network.10f7f10bab10eee182b2f0dedc01160e729a39e89aa6f92eb6ce960f534ce0ac" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-calico--kube--controllers--598d7bd9d8--kgsh2-eth0" Nov 8 00:26:21.633606 containerd[1730]: 2025-11-08 00:26:21.453 [INFO][5162] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="10f7f10bab10eee182b2f0dedc01160e729a39e89aa6f92eb6ce960f534ce0ac" HandleID="k8s-pod-network.10f7f10bab10eee182b2f0dedc01160e729a39e89aa6f92eb6ce960f534ce0ac" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-calico--kube--controllers--598d7bd9d8--kgsh2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5010), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-2742f1d4ae", "pod":"calico-kube-controllers-598d7bd9d8-kgsh2", "timestamp":"2025-11-08 00:26:21.451219912 +0000 UTC"}, Hostname:"ci-4081.3.6-n-2742f1d4ae", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:26:21.633606 containerd[1730]: 2025-11-08 00:26:21.454 [INFO][5162] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:21.633606 containerd[1730]: 2025-11-08 00:26:21.494 [INFO][5162] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:26:21.633606 containerd[1730]: 2025-11-08 00:26:21.494 [INFO][5162] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-2742f1d4ae' Nov 8 00:26:21.633606 containerd[1730]: 2025-11-08 00:26:21.561 [INFO][5162] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.10f7f10bab10eee182b2f0dedc01160e729a39e89aa6f92eb6ce960f534ce0ac" host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:21.633606 containerd[1730]: 2025-11-08 00:26:21.568 [INFO][5162] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:21.633606 containerd[1730]: 2025-11-08 00:26:21.573 [INFO][5162] ipam/ipam.go 511: Trying affinity for 192.168.59.64/26 host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:21.633606 containerd[1730]: 2025-11-08 00:26:21.576 [INFO][5162] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.64/26 host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:21.633606 containerd[1730]: 2025-11-08 00:26:21.582 [INFO][5162] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.64/26 host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:21.633606 containerd[1730]: 2025-11-08 00:26:21.582 [INFO][5162] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.59.64/26 handle="k8s-pod-network.10f7f10bab10eee182b2f0dedc01160e729a39e89aa6f92eb6ce960f534ce0ac" host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:21.633606 containerd[1730]: 2025-11-08 00:26:21.585 [INFO][5162] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.10f7f10bab10eee182b2f0dedc01160e729a39e89aa6f92eb6ce960f534ce0ac Nov 8 00:26:21.633606 containerd[1730]: 2025-11-08 00:26:21.591 [INFO][5162] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.59.64/26 handle="k8s-pod-network.10f7f10bab10eee182b2f0dedc01160e729a39e89aa6f92eb6ce960f534ce0ac" host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:21.633606 containerd[1730]: 2025-11-08 00:26:21.603 [INFO][5162] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.59.70/26] block=192.168.59.64/26 handle="k8s-pod-network.10f7f10bab10eee182b2f0dedc01160e729a39e89aa6f92eb6ce960f534ce0ac" host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:21.633606 containerd[1730]: 2025-11-08 00:26:21.604 [INFO][5162] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.70/26] handle="k8s-pod-network.10f7f10bab10eee182b2f0dedc01160e729a39e89aa6f92eb6ce960f534ce0ac" host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:21.633606 containerd[1730]: 2025-11-08 00:26:21.605 [INFO][5162] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 8 00:26:21.633606 containerd[1730]: 2025-11-08 00:26:21.605 [INFO][5162] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.59.70/26] IPv6=[] ContainerID="10f7f10bab10eee182b2f0dedc01160e729a39e89aa6f92eb6ce960f534ce0ac" HandleID="k8s-pod-network.10f7f10bab10eee182b2f0dedc01160e729a39e89aa6f92eb6ce960f534ce0ac" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-calico--kube--controllers--598d7bd9d8--kgsh2-eth0" Nov 8 00:26:21.634496 containerd[1730]: 2025-11-08 00:26:21.608 [INFO][5138] cni-plugin/k8s.go 418: Populated endpoint ContainerID="10f7f10bab10eee182b2f0dedc01160e729a39e89aa6f92eb6ce960f534ce0ac" Namespace="calico-system" Pod="calico-kube-controllers-598d7bd9d8-kgsh2" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-calico--kube--controllers--598d7bd9d8--kgsh2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2742f1d4ae-k8s-calico--kube--controllers--598d7bd9d8--kgsh2-eth0", GenerateName:"calico-kube-controllers-598d7bd9d8-", Namespace:"calico-system", SelfLink:"", UID:"928ddcfe-b055-4feb-bfb6-23dedc6fa744", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 25, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"598d7bd9d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2742f1d4ae", ContainerID:"", Pod:"calico-kube-controllers-598d7bd9d8-kgsh2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.59.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid46d428498a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:21.634496 containerd[1730]: 2025-11-08 00:26:21.608 [INFO][5138] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.70/32] ContainerID="10f7f10bab10eee182b2f0dedc01160e729a39e89aa6f92eb6ce960f534ce0ac" Namespace="calico-system" Pod="calico-kube-controllers-598d7bd9d8-kgsh2" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-calico--kube--controllers--598d7bd9d8--kgsh2-eth0" Nov 8 00:26:21.634496 containerd[1730]: 2025-11-08 00:26:21.608 [INFO][5138] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid46d428498a ContainerID="10f7f10bab10eee182b2f0dedc01160e729a39e89aa6f92eb6ce960f534ce0ac" Namespace="calico-system" Pod="calico-kube-controllers-598d7bd9d8-kgsh2" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-calico--kube--controllers--598d7bd9d8--kgsh2-eth0" Nov 8 00:26:21.634496 containerd[1730]: 2025-11-08 00:26:21.614 [INFO][5138] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="10f7f10bab10eee182b2f0dedc01160e729a39e89aa6f92eb6ce960f534ce0ac" Namespace="calico-system" Pod="calico-kube-controllers-598d7bd9d8-kgsh2" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-calico--kube--controllers--598d7bd9d8--kgsh2-eth0" 
Nov 8 00:26:21.634496 containerd[1730]: 2025-11-08 00:26:21.614 [INFO][5138] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="10f7f10bab10eee182b2f0dedc01160e729a39e89aa6f92eb6ce960f534ce0ac" Namespace="calico-system" Pod="calico-kube-controllers-598d7bd9d8-kgsh2" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-calico--kube--controllers--598d7bd9d8--kgsh2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2742f1d4ae-k8s-calico--kube--controllers--598d7bd9d8--kgsh2-eth0", GenerateName:"calico-kube-controllers-598d7bd9d8-", Namespace:"calico-system", SelfLink:"", UID:"928ddcfe-b055-4feb-bfb6-23dedc6fa744", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 25, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"598d7bd9d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2742f1d4ae", ContainerID:"10f7f10bab10eee182b2f0dedc01160e729a39e89aa6f92eb6ce960f534ce0ac", Pod:"calico-kube-controllers-598d7bd9d8-kgsh2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.59.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid46d428498a", MAC:"2e:58:a6:ac:1b:92", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:21.634496 containerd[1730]: 2025-11-08 00:26:21.629 [INFO][5138] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="10f7f10bab10eee182b2f0dedc01160e729a39e89aa6f92eb6ce960f534ce0ac" Namespace="calico-system" Pod="calico-kube-controllers-598d7bd9d8-kgsh2" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-calico--kube--controllers--598d7bd9d8--kgsh2-eth0" Nov 8 00:26:21.682826 containerd[1730]: time="2025-11-08T00:26:21.681314330Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:26:21.682826 containerd[1730]: time="2025-11-08T00:26:21.681378131Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:26:21.682826 containerd[1730]: time="2025-11-08T00:26:21.681402331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:21.682826 containerd[1730]: time="2025-11-08T00:26:21.681492532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:21.708092 containerd[1730]: time="2025-11-08T00:26:21.708047392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74448999c6-6grzk,Uid:0ce899e0-d12a-4abb-b40d-26c4cc149868,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"1a5639493f6f315aacf6a09e99d57671e89667730eb47f3f4db44ce089a6b9da\"" Nov 8 00:26:21.711023 containerd[1730]: time="2025-11-08T00:26:21.710853430Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:26:21.727530 systemd[1]: run-containerd-runc-k8s.io-10f7f10bab10eee182b2f0dedc01160e729a39e89aa6f92eb6ce960f534ce0ac-runc.lX9I4j.mount: Deactivated successfully. Nov 8 00:26:21.737063 systemd[1]: Started cri-containerd-10f7f10bab10eee182b2f0dedc01160e729a39e89aa6f92eb6ce960f534ce0ac.scope - libcontainer container 10f7f10bab10eee182b2f0dedc01160e729a39e89aa6f92eb6ce960f534ce0ac. Nov 8 00:26:21.776457 containerd[1730]: time="2025-11-08T00:26:21.776363118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-598d7bd9d8-kgsh2,Uid:928ddcfe-b055-4feb-bfb6-23dedc6fa744,Namespace:calico-system,Attempt:1,} returns sandbox id \"10f7f10bab10eee182b2f0dedc01160e729a39e89aa6f92eb6ce960f534ce0ac\"" Nov 8 00:26:21.972525 containerd[1730]: time="2025-11-08T00:26:21.972389673Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:26:21.976179 containerd[1730]: time="2025-11-08T00:26:21.976085823Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:26:21.976179 containerd[1730]: time="2025-11-08T00:26:21.976127224Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:26:21.976459 kubelet[3231]: E1108 00:26:21.976406 3231 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:26:21.976582 kubelet[3231]: E1108 00:26:21.976488 3231 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:26:21.977096 kubelet[3231]: E1108 00:26:21.976736 3231 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-74448999c6-6grzk_calico-apiserver(0ce899e0-d12a-4abb-b40d-26c4cc149868): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:26:21.977096 kubelet[3231]: E1108 00:26:21.976790 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74448999c6-6grzk" podUID="0ce899e0-d12a-4abb-b40d-26c4cc149868" Nov 8 00:26:21.978443 containerd[1730]: time="2025-11-08T00:26:21.976883734Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:26:22.130202 containerd[1730]: time="2025-11-08T00:26:22.129760106Z" level=info msg="StopPodSandbox for \"4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950\"" Nov 8 00:26:22.130788 containerd[1730]: time="2025-11-08T00:26:22.129799906Z" level=info msg="StopPodSandbox for \"81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279\"" Nov 8 00:26:22.134154 systemd-networkd[1352]: calid010458482a: Gained IPv6LL Nov 8 00:26:22.215802 containerd[1730]: time="2025-11-08T00:26:22.215520067Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:26:22.219674 containerd[1730]: time="2025-11-08T00:26:22.219465321Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:26:22.219674 containerd[1730]: time="2025-11-08T00:26:22.219573122Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:26:22.220702 kubelet[3231]: E1108 00:26:22.220051 3231 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:26:22.222045 kubelet[3231]: E1108 00:26:22.220108 3231 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:26:22.222045 kubelet[3231]: E1108 00:26:22.220974 3231 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-598d7bd9d8-kgsh2_calico-system(928ddcfe-b055-4feb-bfb6-23dedc6fa744): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:26:22.222045 kubelet[3231]: E1108 00:26:22.221017 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-598d7bd9d8-kgsh2" podUID="928ddcfe-b055-4feb-bfb6-23dedc6fa744" Nov 8 00:26:22.277219 containerd[1730]: 2025-11-08 00:26:22.199 [INFO][5297] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950" Nov 8 00:26:22.277219 containerd[1730]: 2025-11-08 00:26:22.200 [INFO][5297] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950" iface="eth0" netns="/var/run/netns/cni-1b5218a0-1679-03d1-e530-08f39a3b9098" Nov 8 00:26:22.277219 containerd[1730]: 2025-11-08 00:26:22.201 [INFO][5297] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950" iface="eth0" netns="/var/run/netns/cni-1b5218a0-1679-03d1-e530-08f39a3b9098" Nov 8 00:26:22.277219 containerd[1730]: 2025-11-08 00:26:22.202 [INFO][5297] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950" iface="eth0" netns="/var/run/netns/cni-1b5218a0-1679-03d1-e530-08f39a3b9098" Nov 8 00:26:22.277219 containerd[1730]: 2025-11-08 00:26:22.202 [INFO][5297] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950" Nov 8 00:26:22.277219 containerd[1730]: 2025-11-08 00:26:22.202 [INFO][5297] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950" Nov 8 00:26:22.277219 containerd[1730]: 2025-11-08 00:26:22.247 [INFO][5310] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950" HandleID="k8s-pod-network.4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-coredns--66bc5c9577--tn4hf-eth0" Nov 8 00:26:22.277219 containerd[1730]: 2025-11-08 00:26:22.248 [INFO][5310] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:22.277219 containerd[1730]: 2025-11-08 00:26:22.248 [INFO][5310] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:22.277219 containerd[1730]: 2025-11-08 00:26:22.257 [WARNING][5310] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950" HandleID="k8s-pod-network.4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-coredns--66bc5c9577--tn4hf-eth0" Nov 8 00:26:22.277219 containerd[1730]: 2025-11-08 00:26:22.260 [INFO][5310] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950" HandleID="k8s-pod-network.4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-coredns--66bc5c9577--tn4hf-eth0" Nov 8 00:26:22.277219 containerd[1730]: 2025-11-08 00:26:22.268 [INFO][5310] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:22.277219 containerd[1730]: 2025-11-08 00:26:22.273 [INFO][5297] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950" Nov 8 00:26:22.281184 containerd[1730]: time="2025-11-08T00:26:22.279367233Z" level=info msg="TearDown network for sandbox \"4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950\" successfully" Nov 8 00:26:22.281184 containerd[1730]: time="2025-11-08T00:26:22.279502634Z" level=info msg="StopPodSandbox for \"4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950\" returns successfully" Nov 8 00:26:22.283675 systemd[1]: run-netns-cni\x2d1b5218a0\x2d1679\x2d03d1\x2de530\x2d08f39a3b9098.mount: Deactivated successfully. Nov 8 00:26:22.292169 containerd[1730]: time="2025-11-08T00:26:22.292135105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-tn4hf,Uid:dd3dad8e-8761-486d-82e7-516eeba2f8a7,Namespace:kube-system,Attempt:1,}" Nov 8 00:26:22.305168 containerd[1730]: 2025-11-08 00:26:22.224 [INFO][5296] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279" Nov 8 00:26:22.305168 containerd[1730]: 2025-11-08 00:26:22.226 [INFO][5296] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279" iface="eth0" netns="/var/run/netns/cni-55e01589-89db-d87d-0474-54da60e4d74c" Nov 8 00:26:22.305168 containerd[1730]: 2025-11-08 00:26:22.227 [INFO][5296] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279" iface="eth0" netns="/var/run/netns/cni-55e01589-89db-d87d-0474-54da60e4d74c" Nov 8 00:26:22.305168 containerd[1730]: 2025-11-08 00:26:22.228 [INFO][5296] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279" iface="eth0" netns="/var/run/netns/cni-55e01589-89db-d87d-0474-54da60e4d74c" Nov 8 00:26:22.305168 containerd[1730]: 2025-11-08 00:26:22.228 [INFO][5296] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279" Nov 8 00:26:22.305168 containerd[1730]: 2025-11-08 00:26:22.229 [INFO][5296] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279" Nov 8 00:26:22.305168 containerd[1730]: 2025-11-08 00:26:22.287 [INFO][5316] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279" HandleID="k8s-pod-network.81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-calico--apiserver--74448999c6--ltcnv-eth0" Nov 8 00:26:22.305168 containerd[1730]: 2025-11-08 00:26:22.290 [INFO][5316] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:22.305168 containerd[1730]: 2025-11-08 00:26:22.290 [INFO][5316] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:22.305168 containerd[1730]: 2025-11-08 00:26:22.299 [WARNING][5316] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279" HandleID="k8s-pod-network.81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-calico--apiserver--74448999c6--ltcnv-eth0" Nov 8 00:26:22.305168 containerd[1730]: 2025-11-08 00:26:22.299 [INFO][5316] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279" HandleID="k8s-pod-network.81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-calico--apiserver--74448999c6--ltcnv-eth0" Nov 8 00:26:22.305168 containerd[1730]: 2025-11-08 00:26:22.301 [INFO][5316] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:22.305168 containerd[1730]: 2025-11-08 00:26:22.302 [INFO][5296] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279" Nov 8 00:26:22.308357 containerd[1730]: time="2025-11-08T00:26:22.305313184Z" level=info msg="TearDown network for sandbox \"81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279\" successfully" Nov 8 00:26:22.308357 containerd[1730]: time="2025-11-08T00:26:22.305363085Z" level=info msg="StopPodSandbox for \"81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279\" returns successfully" Nov 8 00:26:22.312112 containerd[1730]: time="2025-11-08T00:26:22.311768471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74448999c6-ltcnv,Uid:136d7667-9127-4dfa-b5ce-1dde786b7211,Namespace:calico-apiserver,Attempt:1,}" Nov 8 00:26:22.386250 kubelet[3231]: E1108 00:26:22.386203 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74448999c6-6grzk" podUID="0ce899e0-d12a-4abb-b40d-26c4cc149868" Nov 8 00:26:22.393411 kubelet[3231]: E1108 00:26:22.392789 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-598d7bd9d8-kgsh2" podUID="928ddcfe-b055-4feb-bfb6-23dedc6fa744" Nov 8 00:26:22.394583 kubelet[3231]: E1108 00:26:22.393871 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-sn8mq" 
podUID="fc5e470f-14b3-444c-ac8a-3efb084d3809" Nov 8 00:26:22.526951 systemd-networkd[1352]: cali00b3669591c: Link UP Nov 8 00:26:22.530820 systemd-networkd[1352]: cali00b3669591c: Gained carrier Nov 8 00:26:22.561949 containerd[1730]: 2025-11-08 00:26:22.395 [INFO][5324] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--2742f1d4ae-k8s-coredns--66bc5c9577--tn4hf-eth0 coredns-66bc5c9577- kube-system dd3dad8e-8761-486d-82e7-516eeba2f8a7 998 0 2025-11-08 00:25:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-2742f1d4ae coredns-66bc5c9577-tn4hf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali00b3669591c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="7e5cdac77ecd039862578feecfeb1f270179d6124f22b404ce538db8c315ba9b" Namespace="kube-system" Pod="coredns-66bc5c9577-tn4hf" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-coredns--66bc5c9577--tn4hf-" Nov 8 00:26:22.561949 containerd[1730]: 2025-11-08 00:26:22.395 [INFO][5324] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7e5cdac77ecd039862578feecfeb1f270179d6124f22b404ce538db8c315ba9b" Namespace="kube-system" Pod="coredns-66bc5c9577-tn4hf" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-coredns--66bc5c9577--tn4hf-eth0" Nov 8 00:26:22.561949 containerd[1730]: 2025-11-08 00:26:22.458 [INFO][5349] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7e5cdac77ecd039862578feecfeb1f270179d6124f22b404ce538db8c315ba9b" HandleID="k8s-pod-network.7e5cdac77ecd039862578feecfeb1f270179d6124f22b404ce538db8c315ba9b" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-coredns--66bc5c9577--tn4hf-eth0" Nov 8 00:26:22.561949 containerd[1730]: 2025-11-08 00:26:22.458 [INFO][5349] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7e5cdac77ecd039862578feecfeb1f270179d6124f22b404ce538db8c315ba9b" HandleID="k8s-pod-network.7e5cdac77ecd039862578feecfeb1f270179d6124f22b404ce538db8c315ba9b" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-coredns--66bc5c9577--tn4hf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f120), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-2742f1d4ae", "pod":"coredns-66bc5c9577-tn4hf", "timestamp":"2025-11-08 00:26:22.458042653 +0000 UTC"}, Hostname:"ci-4081.3.6-n-2742f1d4ae", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:26:22.561949 containerd[1730]: 2025-11-08 00:26:22.458 [INFO][5349] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:22.561949 containerd[1730]: 2025-11-08 00:26:22.458 [INFO][5349] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 8 00:26:22.561949 containerd[1730]: 2025-11-08 00:26:22.458 [INFO][5349] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-2742f1d4ae' Nov 8 00:26:22.561949 containerd[1730]: 2025-11-08 00:26:22.469 [INFO][5349] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7e5cdac77ecd039862578feecfeb1f270179d6124f22b404ce538db8c315ba9b" host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:22.561949 containerd[1730]: 2025-11-08 00:26:22.481 [INFO][5349] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:22.561949 containerd[1730]: 2025-11-08 00:26:22.485 [INFO][5349] ipam/ipam.go 511: Trying affinity for 192.168.59.64/26 host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:22.561949 containerd[1730]: 2025-11-08 00:26:22.488 [INFO][5349] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.64/26 host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:22.561949 containerd[1730]: 2025-11-08 00:26:22.492 [INFO][5349] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.64/26 host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:22.561949 containerd[1730]: 2025-11-08 00:26:22.492 [INFO][5349] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.59.64/26 handle="k8s-pod-network.7e5cdac77ecd039862578feecfeb1f270179d6124f22b404ce538db8c315ba9b" host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:22.561949 containerd[1730]: 2025-11-08 00:26:22.493 [INFO][5349] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7e5cdac77ecd039862578feecfeb1f270179d6124f22b404ce538db8c315ba9b Nov 8 00:26:22.561949 containerd[1730]: 2025-11-08 00:26:22.498 [INFO][5349] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.59.64/26 handle="k8s-pod-network.7e5cdac77ecd039862578feecfeb1f270179d6124f22b404ce538db8c315ba9b" host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:22.561949 containerd[1730]: 2025-11-08 00:26:22.508 [INFO][5349] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.59.71/26] block=192.168.59.64/26 handle="k8s-pod-network.7e5cdac77ecd039862578feecfeb1f270179d6124f22b404ce538db8c315ba9b" host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:22.561949 containerd[1730]: 2025-11-08 00:26:22.508 [INFO][5349] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.71/26] handle="k8s-pod-network.7e5cdac77ecd039862578feecfeb1f270179d6124f22b404ce538db8c315ba9b" host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:22.561949 containerd[1730]: 2025-11-08 00:26:22.508 [INFO][5349] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
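The IPAM sequence above confirms this node's affinity for the block 192.168.59.64/26 and claims 192.168.59.71 from it for coredns-66bc5c9577-tn4hf. The block arithmetic is easy to sanity-check with the standard library (plain stdlib math, not Calico code):

import ipaddress

block = ipaddress.ip_network("192.168.59.64/26")  # affine block from the log
claimed = ipaddress.ip_address("192.168.59.71")   # address handed to the pod

print(block.num_addresses)   # 64 addresses in a /26
print(block[0], block[-1])   # 192.168.59.64 192.168.59.127
print(claimed in block)      # True: the claim falls inside the affine block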
Nov 8 00:26:22.561949 containerd[1730]: 2025-11-08 00:26:22.509 [INFO][5349] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.59.71/26] IPv6=[] ContainerID="7e5cdac77ecd039862578feecfeb1f270179d6124f22b404ce538db8c315ba9b" HandleID="k8s-pod-network.7e5cdac77ecd039862578feecfeb1f270179d6124f22b404ce538db8c315ba9b" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-coredns--66bc5c9577--tn4hf-eth0" Nov 8 00:26:22.562885 containerd[1730]: 2025-11-08 00:26:22.516 [INFO][5324] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7e5cdac77ecd039862578feecfeb1f270179d6124f22b404ce538db8c315ba9b" Namespace="kube-system" Pod="coredns-66bc5c9577-tn4hf" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-coredns--66bc5c9577--tn4hf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2742f1d4ae-k8s-coredns--66bc5c9577--tn4hf-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"dd3dad8e-8761-486d-82e7-516eeba2f8a7", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 25, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2742f1d4ae", ContainerID:"", Pod:"coredns-66bc5c9577-tn4hf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali00b3669591c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:22.562885 containerd[1730]: 2025-11-08 00:26:22.516 [INFO][5324] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.71/32] ContainerID="7e5cdac77ecd039862578feecfeb1f270179d6124f22b404ce538db8c315ba9b" Namespace="kube-system" Pod="coredns-66bc5c9577-tn4hf" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-coredns--66bc5c9577--tn4hf-eth0" Nov 8 00:26:22.562885 containerd[1730]: 2025-11-08 00:26:22.516 [INFO][5324] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali00b3669591c ContainerID="7e5cdac77ecd039862578feecfeb1f270179d6124f22b404ce538db8c315ba9b" Namespace="kube-system" Pod="coredns-66bc5c9577-tn4hf" 
WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-coredns--66bc5c9577--tn4hf-eth0" Nov 8 00:26:22.562885 containerd[1730]: 2025-11-08 00:26:22.532 [INFO][5324] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7e5cdac77ecd039862578feecfeb1f270179d6124f22b404ce538db8c315ba9b" Namespace="kube-system" Pod="coredns-66bc5c9577-tn4hf" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-coredns--66bc5c9577--tn4hf-eth0" Nov 8 00:26:22.562885 containerd[1730]: 2025-11-08 00:26:22.532 [INFO][5324] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7e5cdac77ecd039862578feecfeb1f270179d6124f22b404ce538db8c315ba9b" Namespace="kube-system" Pod="coredns-66bc5c9577-tn4hf" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-coredns--66bc5c9577--tn4hf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2742f1d4ae-k8s-coredns--66bc5c9577--tn4hf-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"dd3dad8e-8761-486d-82e7-516eeba2f8a7", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 25, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2742f1d4ae", ContainerID:"7e5cdac77ecd039862578feecfeb1f270179d6124f22b404ce538db8c315ba9b", Pod:"coredns-66bc5c9577-tn4hf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali00b3669591c", MAC:"16:ef:1e:1b:7f:a0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:22.563214 containerd[1730]: 2025-11-08 00:26:22.558 [INFO][5324] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7e5cdac77ecd039862578feecfeb1f270179d6124f22b404ce538db8c315ba9b" Namespace="kube-system" Pod="coredns-66bc5c9577-tn4hf" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-coredns--66bc5c9577--tn4hf-eth0" Nov 8 00:26:22.621487 containerd[1730]: time="2025-11-08T00:26:22.618424226Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:26:22.621487 containerd[1730]: time="2025-11-08T00:26:22.621281065Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:26:22.621487 containerd[1730]: time="2025-11-08T00:26:22.621352366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:22.621964 containerd[1730]: time="2025-11-08T00:26:22.621769472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:22.656229 systemd[1]: Started cri-containerd-7e5cdac77ecd039862578feecfeb1f270179d6124f22b404ce538db8c315ba9b.scope - libcontainer container 7e5cdac77ecd039862578feecfeb1f270179d6124f22b404ce538db8c315ba9b. Nov 8 00:26:22.689116 systemd[1]: run-netns-cni\x2d55e01589\x2d89db\x2dd87d\x2d0474\x2d54da60e4d74c.mount: Deactivated successfully. Nov 8 00:26:22.716241 systemd-networkd[1352]: calid46d428498a: Gained IPv6LL Nov 8 00:26:22.721970 systemd-networkd[1352]: cali9674754da9e: Link UP Nov 8 00:26:22.724665 systemd-networkd[1352]: cali9674754da9e: Gained carrier Nov 8 00:26:22.748977 containerd[1730]: 2025-11-08 00:26:22.444 [INFO][5333] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--2742f1d4ae-k8s-calico--apiserver--74448999c6--ltcnv-eth0 calico-apiserver-74448999c6- calico-apiserver 136d7667-9127-4dfa-b5ce-1dde786b7211 999 0 2025-11-08 00:25:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:74448999c6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-2742f1d4ae calico-apiserver-74448999c6-ltcnv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9674754da9e [] [] }} ContainerID="3153d1351e920a2c567c650d4c9e9f1b035a2f11594779bab9c11210ded2f79a" Namespace="calico-apiserver" Pod="calico-apiserver-74448999c6-ltcnv" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-calico--apiserver--74448999c6--ltcnv-" Nov 8 00:26:22.748977 containerd[1730]: 2025-11-08 00:26:22.444 [INFO][5333] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3153d1351e920a2c567c650d4c9e9f1b035a2f11594779bab9c11210ded2f79a" Namespace="calico-apiserver" Pod="calico-apiserver-74448999c6-ltcnv" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-calico--apiserver--74448999c6--ltcnv-eth0" Nov 8 00:26:22.748977 containerd[1730]: 2025-11-08 00:26:22.496 [INFO][5357] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3153d1351e920a2c567c650d4c9e9f1b035a2f11594779bab9c11210ded2f79a" HandleID="k8s-pod-network.3153d1351e920a2c567c650d4c9e9f1b035a2f11594779bab9c11210ded2f79a" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-calico--apiserver--74448999c6--ltcnv-eth0" Nov 8 00:26:22.748977 containerd[1730]: 2025-11-08 00:26:22.497 [INFO][5357] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3153d1351e920a2c567c650d4c9e9f1b035a2f11594779bab9c11210ded2f79a" HandleID="k8s-pod-network.3153d1351e920a2c567c650d4c9e9f1b035a2f11594779bab9c11210ded2f79a" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-calico--apiserver--74448999c6--ltcnv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c55a0), 
Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-2742f1d4ae", "pod":"calico-apiserver-74448999c6-ltcnv", "timestamp":"2025-11-08 00:26:22.49688858 +0000 UTC"}, Hostname:"ci-4081.3.6-n-2742f1d4ae", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 8 00:26:22.748977 containerd[1730]: 2025-11-08 00:26:22.497 [INFO][5357] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:22.748977 containerd[1730]: 2025-11-08 00:26:22.508 [INFO][5357] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:22.748977 containerd[1730]: 2025-11-08 00:26:22.509 [INFO][5357] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-2742f1d4ae' Nov 8 00:26:22.748977 containerd[1730]: 2025-11-08 00:26:22.583 [INFO][5357] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3153d1351e920a2c567c650d4c9e9f1b035a2f11594779bab9c11210ded2f79a" host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:22.748977 containerd[1730]: 2025-11-08 00:26:22.611 [INFO][5357] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:22.748977 containerd[1730]: 2025-11-08 00:26:22.633 [INFO][5357] ipam/ipam.go 511: Trying affinity for 192.168.59.64/26 host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:22.748977 containerd[1730]: 2025-11-08 00:26:22.650 [INFO][5357] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.64/26 host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:22.748977 containerd[1730]: 2025-11-08 00:26:22.658 [INFO][5357] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.64/26 host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:22.748977 containerd[1730]: 2025-11-08 00:26:22.658 [INFO][5357] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.59.64/26 handle="k8s-pod-network.3153d1351e920a2c567c650d4c9e9f1b035a2f11594779bab9c11210ded2f79a" host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:22.748977 containerd[1730]: 2025-11-08 00:26:22.660 [INFO][5357] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3153d1351e920a2c567c650d4c9e9f1b035a2f11594779bab9c11210ded2f79a Nov 8 00:26:22.748977 containerd[1730]: 2025-11-08 00:26:22.672 [INFO][5357] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.59.64/26 handle="k8s-pod-network.3153d1351e920a2c567c650d4c9e9f1b035a2f11594779bab9c11210ded2f79a" host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:22.748977 containerd[1730]: 2025-11-08 00:26:22.706 [INFO][5357] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.59.72/26] block=192.168.59.64/26 handle="k8s-pod-network.3153d1351e920a2c567c650d4c9e9f1b035a2f11594779bab9c11210ded2f79a" host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:22.748977 containerd[1730]: 2025-11-08 00:26:22.706 [INFO][5357] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.72/26] handle="k8s-pod-network.3153d1351e920a2c567c650d4c9e9f1b035a2f11594779bab9c11210ded2f79a" host="ci-4081.3.6-n-2742f1d4ae" Nov 8 00:26:22.748977 containerd[1730]: 2025-11-08 00:26:22.706 [INFO][5357] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
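The same block then yields the next free address, 192.168.59.72, for calico-apiserver-74448999c6-ltcnv below. Note also that throughout these WorkloadEndpoint dumps the container ports are printed as Go hex literals (Port:0x35, Port:0x23c1, Port:0x1f90, Port:0x1ff5); decoded, they line up with the port names in the endpoint metadata. A quick check (values copied from the dumps, the decoding itself is just arithmetic):

# name -> hex literal as it appears in the v3.WorkloadEndpointPort dumps
for name, port in [("dns", 0x35), ("dns-tcp", 0x35), ("metrics", 0x23c1),
                   ("liveness-probe", 0x1f90), ("readiness-probe", 0x1ff5)]:
    print(f"{name}: {port}")
# dns: 53, dns-tcp: 53, metrics: 9153, liveness-probe: 8080, readiness-probe: 8181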
Nov 8 00:26:22.748977 containerd[1730]: 2025-11-08 00:26:22.706 [INFO][5357] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.59.72/26] IPv6=[] ContainerID="3153d1351e920a2c567c650d4c9e9f1b035a2f11594779bab9c11210ded2f79a" HandleID="k8s-pod-network.3153d1351e920a2c567c650d4c9e9f1b035a2f11594779bab9c11210ded2f79a" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-calico--apiserver--74448999c6--ltcnv-eth0" Nov 8 00:26:22.749881 containerd[1730]: 2025-11-08 00:26:22.708 [INFO][5333] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3153d1351e920a2c567c650d4c9e9f1b035a2f11594779bab9c11210ded2f79a" Namespace="calico-apiserver" Pod="calico-apiserver-74448999c6-ltcnv" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-calico--apiserver--74448999c6--ltcnv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2742f1d4ae-k8s-calico--apiserver--74448999c6--ltcnv-eth0", GenerateName:"calico-apiserver-74448999c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"136d7667-9127-4dfa-b5ce-1dde786b7211", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 25, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74448999c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2742f1d4ae", ContainerID:"", Pod:"calico-apiserver-74448999c6-ltcnv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9674754da9e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:22.749881 containerd[1730]: 2025-11-08 00:26:22.708 [INFO][5333] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.72/32] ContainerID="3153d1351e920a2c567c650d4c9e9f1b035a2f11594779bab9c11210ded2f79a" Namespace="calico-apiserver" Pod="calico-apiserver-74448999c6-ltcnv" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-calico--apiserver--74448999c6--ltcnv-eth0" Nov 8 00:26:22.749881 containerd[1730]: 2025-11-08 00:26:22.708 [INFO][5333] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9674754da9e ContainerID="3153d1351e920a2c567c650d4c9e9f1b035a2f11594779bab9c11210ded2f79a" Namespace="calico-apiserver" Pod="calico-apiserver-74448999c6-ltcnv" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-calico--apiserver--74448999c6--ltcnv-eth0" Nov 8 00:26:22.749881 containerd[1730]: 2025-11-08 00:26:22.722 [INFO][5333] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3153d1351e920a2c567c650d4c9e9f1b035a2f11594779bab9c11210ded2f79a" Namespace="calico-apiserver" Pod="calico-apiserver-74448999c6-ltcnv" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-calico--apiserver--74448999c6--ltcnv-eth0" Nov 8 00:26:22.749881 containerd[1730]: 2025-11-08 00:26:22.722 [INFO][5333] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3153d1351e920a2c567c650d4c9e9f1b035a2f11594779bab9c11210ded2f79a" Namespace="calico-apiserver" Pod="calico-apiserver-74448999c6-ltcnv" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-calico--apiserver--74448999c6--ltcnv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2742f1d4ae-k8s-calico--apiserver--74448999c6--ltcnv-eth0", GenerateName:"calico-apiserver-74448999c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"136d7667-9127-4dfa-b5ce-1dde786b7211", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 25, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74448999c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2742f1d4ae", ContainerID:"3153d1351e920a2c567c650d4c9e9f1b035a2f11594779bab9c11210ded2f79a", Pod:"calico-apiserver-74448999c6-ltcnv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9674754da9e", MAC:"aa:a4:e6:36:99:7a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:22.749881 containerd[1730]: 2025-11-08 00:26:22.747 [INFO][5333] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3153d1351e920a2c567c650d4c9e9f1b035a2f11594779bab9c11210ded2f79a" Namespace="calico-apiserver" Pod="calico-apiserver-74448999c6-ltcnv" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-calico--apiserver--74448999c6--ltcnv-eth0" Nov 8 00:26:22.793991 containerd[1730]: time="2025-11-08T00:26:22.793952204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-tn4hf,Uid:dd3dad8e-8761-486d-82e7-516eeba2f8a7,Namespace:kube-system,Attempt:1,} returns sandbox id \"7e5cdac77ecd039862578feecfeb1f270179d6124f22b404ce538db8c315ba9b\"" Nov 8 00:26:22.797996 containerd[1730]: time="2025-11-08T00:26:22.797139948Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 8 00:26:22.797996 containerd[1730]: time="2025-11-08T00:26:22.797218949Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 8 00:26:22.797996 containerd[1730]: time="2025-11-08T00:26:22.797238449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:22.797996 containerd[1730]: time="2025-11-08T00:26:22.797336250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 8 00:26:22.810569 containerd[1730]: time="2025-11-08T00:26:22.808981408Z" level=info msg="CreateContainer within sandbox \"7e5cdac77ecd039862578feecfeb1f270179d6124f22b404ce538db8c315ba9b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 8 00:26:22.838024 systemd-networkd[1352]: cali415ca59f634: Gained IPv6LL Nov 8 00:26:22.845910 systemd[1]: Started cri-containerd-3153d1351e920a2c567c650d4c9e9f1b035a2f11594779bab9c11210ded2f79a.scope - libcontainer container 3153d1351e920a2c567c650d4c9e9f1b035a2f11594779bab9c11210ded2f79a. Nov 8 00:26:22.865411 containerd[1730]: time="2025-11-08T00:26:22.865365672Z" level=info msg="CreateContainer within sandbox \"7e5cdac77ecd039862578feecfeb1f270179d6124f22b404ce538db8c315ba9b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b743ccf08443c1942bd0d1bdb1097b0a83ceea079a7993b756f3869bd51546c4\"" Nov 8 00:26:22.867433 containerd[1730]: time="2025-11-08T00:26:22.867396399Z" level=info msg="StartContainer for \"b743ccf08443c1942bd0d1bdb1097b0a83ceea079a7993b756f3869bd51546c4\"" Nov 8 00:26:22.917871 systemd[1]: Started cri-containerd-b743ccf08443c1942bd0d1bdb1097b0a83ceea079a7993b756f3869bd51546c4.scope - libcontainer container b743ccf08443c1942bd0d1bdb1097b0a83ceea079a7993b756f3869bd51546c4. Nov 8 00:26:22.961456 containerd[1730]: time="2025-11-08T00:26:22.961313072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74448999c6-ltcnv,Uid:136d7667-9127-4dfa-b5ce-1dde786b7211,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"3153d1351e920a2c567c650d4c9e9f1b035a2f11594779bab9c11210ded2f79a\"" Nov 8 00:26:22.961456 containerd[1730]: time="2025-11-08T00:26:22.961337472Z" level=info msg="StartContainer for \"b743ccf08443c1942bd0d1bdb1097b0a83ceea079a7993b756f3869bd51546c4\" returns successfully" Nov 8 00:26:22.965973 containerd[1730]: time="2025-11-08T00:26:22.965939535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:26:23.234942 containerd[1730]: time="2025-11-08T00:26:23.234793077Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:26:23.238084 containerd[1730]: time="2025-11-08T00:26:23.237949020Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:26:23.238084 containerd[1730]: time="2025-11-08T00:26:23.237997321Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:26:23.238297 kubelet[3231]: E1108 00:26:23.238251 3231 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:26:23.238365 kubelet[3231]: E1108 00:26:23.238325 3231 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:26:23.238465 kubelet[3231]: E1108 00:26:23.238439 3231 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-74448999c6-ltcnv_calico-apiserver(136d7667-9127-4dfa-b5ce-1dde786b7211): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:26:23.238524 kubelet[3231]: E1108 00:26:23.238504 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74448999c6-ltcnv" podUID="136d7667-9127-4dfa-b5ce-1dde786b7211" Nov 8 00:26:23.395368 kubelet[3231]: E1108 00:26:23.395150 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74448999c6-ltcnv" podUID="136d7667-9127-4dfa-b5ce-1dde786b7211" Nov 8 00:26:23.399370 kubelet[3231]: E1108 00:26:23.399189 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-598d7bd9d8-kgsh2" podUID="928ddcfe-b055-4feb-bfb6-23dedc6fa744" Nov 8 00:26:23.399370 kubelet[3231]: E1108 00:26:23.399264 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74448999c6-6grzk" podUID="0ce899e0-d12a-4abb-b40d-26c4cc149868" Nov 8 00:26:23.442476 kubelet[3231]: I1108 00:26:23.442407 3231 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-tn4hf" podStartSLOduration=46.44238839 podStartE2EDuration="46.44238839s" podCreationTimestamp="2025-11-08 00:25:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:26:23.441894983 +0000 UTC 
m=+53.026435183" watchObservedRunningTime="2025-11-08 00:26:23.44238839 +0000 UTC m=+53.026928490" Nov 8 00:26:24.117911 systemd-networkd[1352]: cali9674754da9e: Gained IPv6LL Nov 8 00:26:24.309970 systemd-networkd[1352]: cali00b3669591c: Gained IPv6LL Nov 8 00:26:24.402965 kubelet[3231]: E1108 00:26:24.402829 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74448999c6-ltcnv" podUID="136d7667-9127-4dfa-b5ce-1dde786b7211" Nov 8 00:26:31.108404 containerd[1730]: time="2025-11-08T00:26:31.108358782Z" level=info msg="StopPodSandbox for \"1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54\"" Nov 8 00:26:31.136344 containerd[1730]: time="2025-11-08T00:26:31.135289782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:26:31.187074 containerd[1730]: 2025-11-08 00:26:31.146 [WARNING][5529] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-whisker--57fdf8d985--zxfxb-eth0" Nov 8 00:26:31.187074 containerd[1730]: 2025-11-08 00:26:31.146 [INFO][5529] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54" Nov 8 00:26:31.187074 containerd[1730]: 2025-11-08 00:26:31.146 [INFO][5529] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54" iface="eth0" netns="" Nov 8 00:26:31.187074 containerd[1730]: 2025-11-08 00:26:31.146 [INFO][5529] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54" Nov 8 00:26:31.187074 containerd[1730]: 2025-11-08 00:26:31.146 [INFO][5529] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54" Nov 8 00:26:31.187074 containerd[1730]: 2025-11-08 00:26:31.175 [INFO][5538] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54" HandleID="k8s-pod-network.1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-whisker--57fdf8d985--zxfxb-eth0" Nov 8 00:26:31.187074 containerd[1730]: 2025-11-08 00:26:31.175 [INFO][5538] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:31.187074 containerd[1730]: 2025-11-08 00:26:31.175 [INFO][5538] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:31.187074 containerd[1730]: 2025-11-08 00:26:31.183 [WARNING][5538] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54" HandleID="k8s-pod-network.1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-whisker--57fdf8d985--zxfxb-eth0" Nov 8 00:26:31.187074 containerd[1730]: 2025-11-08 00:26:31.183 [INFO][5538] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54" HandleID="k8s-pod-network.1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-whisker--57fdf8d985--zxfxb-eth0" Nov 8 00:26:31.187074 containerd[1730]: 2025-11-08 00:26:31.184 [INFO][5538] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:31.187074 containerd[1730]: 2025-11-08 00:26:31.186 [INFO][5529] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54" Nov 8 00:26:31.187924 containerd[1730]: time="2025-11-08T00:26:31.187110851Z" level=info msg="TearDown network for sandbox \"1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54\" successfully" Nov 8 00:26:31.187924 containerd[1730]: time="2025-11-08T00:26:31.187142452Z" level=info msg="StopPodSandbox for \"1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54\" returns successfully" Nov 8 00:26:31.187924 containerd[1730]: time="2025-11-08T00:26:31.187822762Z" level=info msg="RemovePodSandbox for \"1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54\"" Nov 8 00:26:31.187924 containerd[1730]: time="2025-11-08T00:26:31.187860562Z" level=info msg="Forcibly stopping sandbox \"1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54\"" Nov 8 00:26:31.255596 containerd[1730]: 2025-11-08 00:26:31.224 [WARNING][5553] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54" WorkloadEndpoint="ci--4081.3.6--n--2742f1d4ae-k8s-whisker--57fdf8d985--zxfxb-eth0" Nov 8 00:26:31.255596 containerd[1730]: 2025-11-08 00:26:31.224 [INFO][5553] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54" Nov 8 00:26:31.255596 containerd[1730]: 2025-11-08 00:26:31.224 [INFO][5553] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54" iface="eth0" netns="" Nov 8 00:26:31.255596 containerd[1730]: 2025-11-08 00:26:31.224 [INFO][5553] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54" Nov 8 00:26:31.255596 containerd[1730]: 2025-11-08 00:26:31.224 [INFO][5553] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54" Nov 8 00:26:31.255596 containerd[1730]: 2025-11-08 00:26:31.246 [INFO][5560] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54" HandleID="k8s-pod-network.1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-whisker--57fdf8d985--zxfxb-eth0" Nov 8 00:26:31.255596 containerd[1730]: 2025-11-08 00:26:31.246 [INFO][5560] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
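The repeated teardown sequences in this log are deliberately idempotent: the IPAM plugin releases by handleID, downgrades a missing allocation to a WARNING ("Asked to release address but it doesn't exist. Ignoring"), falls back to releasing by workloadID, and only then drops the host-wide lock, so StopPodSandbox and the forcible RemovePodSandbox below can be retried safely. A minimal sketch of that control flow under stated assumptions (TinyIPAM and both ID strings are hypothetical stand-ins, not Calico's actual code):

class TinyIPAM:
    """Stand-in allocator keyed by handle or workload ID (hypothetical)."""
    def __init__(self):
        self.allocations = {}  # key -> list of assigned IPs

    def release(self, key):
        if key not in self.allocations:
            raise KeyError(key)  # mirrors "address ... doesn't exist"
        del self.allocations[key]

def release_endpoint_ips(ipam, handle_id, workload_id):
    # Same order as the log: handleID first, then workloadID; a missing
    # allocation is logged and ignored so teardown always completes.
    for key in (handle_id, workload_id):
        try:
            ipam.release(key)
            return
        except KeyError:
            print(f"WARNING: asked to release {key} but it doesn't exist; ignoring")

release_endpoint_ips(TinyIPAM(), "k8s-pod-network.example-handle",
                     "example-workload")  # both IDs hypothetical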
Nov 8 00:26:31.255596 containerd[1730]: 2025-11-08 00:26:31.246 [INFO][5560] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:31.255596 containerd[1730]: 2025-11-08 00:26:31.251 [WARNING][5560] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54" HandleID="k8s-pod-network.1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-whisker--57fdf8d985--zxfxb-eth0" Nov 8 00:26:31.255596 containerd[1730]: 2025-11-08 00:26:31.251 [INFO][5560] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54" HandleID="k8s-pod-network.1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-whisker--57fdf8d985--zxfxb-eth0" Nov 8 00:26:31.255596 containerd[1730]: 2025-11-08 00:26:31.253 [INFO][5560] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:31.255596 containerd[1730]: 2025-11-08 00:26:31.254 [INFO][5553] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54" Nov 8 00:26:31.256369 containerd[1730]: time="2025-11-08T00:26:31.255629568Z" level=info msg="TearDown network for sandbox \"1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54\" successfully" Nov 8 00:26:31.267200 containerd[1730]: time="2025-11-08T00:26:31.267156940Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:26:31.267335 containerd[1730]: time="2025-11-08T00:26:31.267232241Z" level=info msg="RemovePodSandbox \"1ee034a28c6a4d00732e265ee378ec0d3c71c300ba675fdf610adb34eee49f54\" returns successfully" Nov 8 00:26:31.268098 containerd[1730]: time="2025-11-08T00:26:31.268066953Z" level=info msg="StopPodSandbox for \"2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44\"" Nov 8 00:26:31.331030 containerd[1730]: 2025-11-08 00:26:31.300 [WARNING][5574] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2742f1d4ae-k8s-coredns--66bc5c9577--llsdc-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"a589e2c8-4bc2-4178-8ecb-3723aaa6f7a2", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 25, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2742f1d4ae", ContainerID:"fe5aff26b536cfb0b288da1c697b720a9a8e1d38bae01367ae07f36e13991ca1", Pod:"coredns-66bc5c9577-llsdc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2c3645d39d4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:31.331030 containerd[1730]: 2025-11-08 00:26:31.301 [INFO][5574] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44" Nov 8 00:26:31.331030 containerd[1730]: 2025-11-08 00:26:31.301 [INFO][5574] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44" iface="eth0" netns="" Nov 8 00:26:31.331030 containerd[1730]: 2025-11-08 00:26:31.301 [INFO][5574] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44" Nov 8 00:26:31.331030 containerd[1730]: 2025-11-08 00:26:31.301 [INFO][5574] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44" Nov 8 00:26:31.331030 containerd[1730]: 2025-11-08 00:26:31.321 [INFO][5582] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44" HandleID="k8s-pod-network.2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-coredns--66bc5c9577--llsdc-eth0" Nov 8 00:26:31.331030 containerd[1730]: 2025-11-08 00:26:31.321 [INFO][5582] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:31.331030 containerd[1730]: 2025-11-08 00:26:31.321 [INFO][5582] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:31.331030 containerd[1730]: 2025-11-08 00:26:31.327 [WARNING][5582] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44" HandleID="k8s-pod-network.2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-coredns--66bc5c9577--llsdc-eth0" Nov 8 00:26:31.331030 containerd[1730]: 2025-11-08 00:26:31.327 [INFO][5582] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44" HandleID="k8s-pod-network.2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-coredns--66bc5c9577--llsdc-eth0" Nov 8 00:26:31.331030 containerd[1730]: 2025-11-08 00:26:31.328 [INFO][5582] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:31.331030 containerd[1730]: 2025-11-08 00:26:31.329 [INFO][5574] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44" Nov 8 00:26:31.331030 containerd[1730]: time="2025-11-08T00:26:31.330884586Z" level=info msg="TearDown network for sandbox \"2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44\" successfully" Nov 8 00:26:31.331030 containerd[1730]: time="2025-11-08T00:26:31.330917786Z" level=info msg="StopPodSandbox for \"2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44\" returns successfully" Nov 8 00:26:31.333315 containerd[1730]: time="2025-11-08T00:26:31.333261121Z" level=info msg="RemovePodSandbox for \"2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44\"" Nov 8 00:26:31.333414 containerd[1730]: time="2025-11-08T00:26:31.333325322Z" level=info msg="Forcibly stopping sandbox \"2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44\"" Nov 8 00:26:31.384690 containerd[1730]: time="2025-11-08T00:26:31.384404380Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:26:31.388455 containerd[1730]: time="2025-11-08T00:26:31.388304238Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:26:31.388455 containerd[1730]: time="2025-11-08T00:26:31.388401640Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:26:31.389107 kubelet[3231]: E1108 00:26:31.388800 3231 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:26:31.389107 kubelet[3231]: E1108 00:26:31.388858 3231 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:26:31.389107 kubelet[3231]: E1108 00:26:31.388948 3231 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-7747c7fc7-whbjm_calico-system(5b3e910f-ad36-41f9-8d09-74f4b684ee03): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:26:31.392389 containerd[1730]: time="2025-11-08T00:26:31.391938992Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:26:31.420071 containerd[1730]: 2025-11-08 00:26:31.377 [WARNING][5597] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2742f1d4ae-k8s-coredns--66bc5c9577--llsdc-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"a589e2c8-4bc2-4178-8ecb-3723aaa6f7a2", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 25, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2742f1d4ae", ContainerID:"fe5aff26b536cfb0b288da1c697b720a9a8e1d38bae01367ae07f36e13991ca1", Pod:"coredns-66bc5c9577-llsdc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2c3645d39d4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:31.420071 containerd[1730]: 2025-11-08 00:26:31.378 [INFO][5597] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44" Nov 8 00:26:31.420071 containerd[1730]: 2025-11-08 00:26:31.378 [INFO][5597] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44" iface="eth0" netns="" Nov 8 00:26:31.420071 containerd[1730]: 2025-11-08 00:26:31.378 [INFO][5597] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44" Nov 8 00:26:31.420071 containerd[1730]: 2025-11-08 00:26:31.378 [INFO][5597] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44" Nov 8 00:26:31.420071 containerd[1730]: 2025-11-08 00:26:31.405 [INFO][5606] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44" HandleID="k8s-pod-network.2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-coredns--66bc5c9577--llsdc-eth0" Nov 8 00:26:31.420071 containerd[1730]: 2025-11-08 00:26:31.405 [INFO][5606] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:31.420071 containerd[1730]: 2025-11-08 00:26:31.405 [INFO][5606] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:31.420071 containerd[1730]: 2025-11-08 00:26:31.413 [WARNING][5606] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44" HandleID="k8s-pod-network.2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-coredns--66bc5c9577--llsdc-eth0" Nov 8 00:26:31.420071 containerd[1730]: 2025-11-08 00:26:31.413 [INFO][5606] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44" HandleID="k8s-pod-network.2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-coredns--66bc5c9577--llsdc-eth0" Nov 8 00:26:31.420071 containerd[1730]: 2025-11-08 00:26:31.417 [INFO][5606] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:31.420071 containerd[1730]: 2025-11-08 00:26:31.418 [INFO][5597] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44" Nov 8 00:26:31.420845 containerd[1730]: time="2025-11-08T00:26:31.420237012Z" level=info msg="TearDown network for sandbox \"2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44\" successfully" Nov 8 00:26:31.428080 containerd[1730]: time="2025-11-08T00:26:31.428039828Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:26:31.428197 containerd[1730]: time="2025-11-08T00:26:31.428097029Z" level=info msg="RemovePodSandbox \"2109cce74356fac92d18e5459bd731b952effcaede260b4ff837ae3e90e6af44\" returns successfully" Nov 8 00:26:31.428444 containerd[1730]: time="2025-11-08T00:26:31.428418934Z" level=info msg="StopPodSandbox for \"a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762\"" Nov 8 00:26:31.491486 containerd[1730]: 2025-11-08 00:26:31.460 [WARNING][5620] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2742f1d4ae-k8s-calico--kube--controllers--598d7bd9d8--kgsh2-eth0", GenerateName:"calico-kube-controllers-598d7bd9d8-", Namespace:"calico-system", SelfLink:"", UID:"928ddcfe-b055-4feb-bfb6-23dedc6fa744", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 25, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"598d7bd9d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2742f1d4ae", ContainerID:"10f7f10bab10eee182b2f0dedc01160e729a39e89aa6f92eb6ce960f534ce0ac", Pod:"calico-kube-controllers-598d7bd9d8-kgsh2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.59.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid46d428498a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:31.491486 containerd[1730]: 2025-11-08 00:26:31.461 [INFO][5620] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762" Nov 8 00:26:31.491486 containerd[1730]: 2025-11-08 00:26:31.461 [INFO][5620] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762" iface="eth0" netns="" Nov 8 00:26:31.491486 containerd[1730]: 2025-11-08 00:26:31.461 [INFO][5620] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762" Nov 8 00:26:31.491486 containerd[1730]: 2025-11-08 00:26:31.461 [INFO][5620] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762" Nov 8 00:26:31.491486 containerd[1730]: 2025-11-08 00:26:31.481 [INFO][5627] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762" HandleID="k8s-pod-network.a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-calico--kube--controllers--598d7bd9d8--kgsh2-eth0" Nov 8 00:26:31.491486 containerd[1730]: 2025-11-08 00:26:31.481 [INFO][5627] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:31.491486 containerd[1730]: 2025-11-08 00:26:31.481 [INFO][5627] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:31.491486 containerd[1730]: 2025-11-08 00:26:31.487 [WARNING][5627] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762" HandleID="k8s-pod-network.a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-calico--kube--controllers--598d7bd9d8--kgsh2-eth0" Nov 8 00:26:31.491486 containerd[1730]: 2025-11-08 00:26:31.487 [INFO][5627] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762" HandleID="k8s-pod-network.a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-calico--kube--controllers--598d7bd9d8--kgsh2-eth0" Nov 8 00:26:31.491486 containerd[1730]: 2025-11-08 00:26:31.488 [INFO][5627] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:31.491486 containerd[1730]: 2025-11-08 00:26:31.490 [INFO][5620] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762" Nov 8 00:26:31.492361 containerd[1730]: time="2025-11-08T00:26:31.491523904Z" level=info msg="TearDown network for sandbox \"a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762\" successfully" Nov 8 00:26:31.492361 containerd[1730]: time="2025-11-08T00:26:31.491555305Z" level=info msg="StopPodSandbox for \"a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762\" returns successfully" Nov 8 00:26:31.492361 containerd[1730]: time="2025-11-08T00:26:31.492124314Z" level=info msg="RemovePodSandbox for \"a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762\"" Nov 8 00:26:31.492361 containerd[1730]: time="2025-11-08T00:26:31.492158414Z" level=info msg="Forcibly stopping sandbox \"a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762\"" Nov 8 00:26:31.553691 containerd[1730]: 2025-11-08 00:26:31.524 [WARNING][5641] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2742f1d4ae-k8s-calico--kube--controllers--598d7bd9d8--kgsh2-eth0", GenerateName:"calico-kube-controllers-598d7bd9d8-", Namespace:"calico-system", SelfLink:"", UID:"928ddcfe-b055-4feb-bfb6-23dedc6fa744", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 25, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"598d7bd9d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2742f1d4ae", ContainerID:"10f7f10bab10eee182b2f0dedc01160e729a39e89aa6f92eb6ce960f534ce0ac", Pod:"calico-kube-controllers-598d7bd9d8-kgsh2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.59.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid46d428498a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:31.553691 containerd[1730]: 2025-11-08 00:26:31.524 [INFO][5641] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762" Nov 8 00:26:31.553691 containerd[1730]: 2025-11-08 00:26:31.524 [INFO][5641] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762" iface="eth0" netns="" Nov 8 00:26:31.553691 containerd[1730]: 2025-11-08 00:26:31.524 [INFO][5641] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762" Nov 8 00:26:31.553691 containerd[1730]: 2025-11-08 00:26:31.524 [INFO][5641] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762" Nov 8 00:26:31.553691 containerd[1730]: 2025-11-08 00:26:31.544 [INFO][5648] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762" HandleID="k8s-pod-network.a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-calico--kube--controllers--598d7bd9d8--kgsh2-eth0" Nov 8 00:26:31.553691 containerd[1730]: 2025-11-08 00:26:31.544 [INFO][5648] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:31.553691 containerd[1730]: 2025-11-08 00:26:31.544 [INFO][5648] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:31.553691 containerd[1730]: 2025-11-08 00:26:31.550 [WARNING][5648] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762" HandleID="k8s-pod-network.a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-calico--kube--controllers--598d7bd9d8--kgsh2-eth0" Nov 8 00:26:31.553691 containerd[1730]: 2025-11-08 00:26:31.550 [INFO][5648] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762" HandleID="k8s-pod-network.a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-calico--kube--controllers--598d7bd9d8--kgsh2-eth0" Nov 8 00:26:31.553691 containerd[1730]: 2025-11-08 00:26:31.551 [INFO][5648] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:31.553691 containerd[1730]: 2025-11-08 00:26:31.552 [INFO][5641] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762" Nov 8 00:26:31.554356 containerd[1730]: time="2025-11-08T00:26:31.553831185Z" level=info msg="TearDown network for sandbox \"a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762\" successfully" Nov 8 00:26:31.561619 containerd[1730]: time="2025-11-08T00:26:31.561577207Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:26:31.562021 containerd[1730]: time="2025-11-08T00:26:31.561639808Z" level=info msg="RemovePodSandbox \"a6154fdd1bfb4525338935e6aab6c3140468415869f1fa1698df4927abb72762\" returns successfully" Nov 8 00:26:31.562497 containerd[1730]: time="2025-11-08T00:26:31.562468221Z" level=info msg="StopPodSandbox for \"81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279\"" Nov 8 00:26:31.628140 containerd[1730]: 2025-11-08 00:26:31.594 [WARNING][5662] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2742f1d4ae-k8s-calico--apiserver--74448999c6--ltcnv-eth0", GenerateName:"calico-apiserver-74448999c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"136d7667-9127-4dfa-b5ce-1dde786b7211", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 25, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74448999c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2742f1d4ae", ContainerID:"3153d1351e920a2c567c650d4c9e9f1b035a2f11594779bab9c11210ded2f79a", Pod:"calico-apiserver-74448999c6-ltcnv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9674754da9e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:31.628140 containerd[1730]: 2025-11-08 00:26:31.594 [INFO][5662] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279" Nov 8 00:26:31.628140 containerd[1730]: 2025-11-08 00:26:31.594 [INFO][5662] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279" iface="eth0" netns="" Nov 8 00:26:31.628140 containerd[1730]: 2025-11-08 00:26:31.594 [INFO][5662] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279" Nov 8 00:26:31.628140 containerd[1730]: 2025-11-08 00:26:31.594 [INFO][5662] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279" Nov 8 00:26:31.628140 containerd[1730]: 2025-11-08 00:26:31.618 [INFO][5670] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279" HandleID="k8s-pod-network.81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-calico--apiserver--74448999c6--ltcnv-eth0" Nov 8 00:26:31.628140 containerd[1730]: 2025-11-08 00:26:31.618 [INFO][5670] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:31.628140 containerd[1730]: 2025-11-08 00:26:31.618 [INFO][5670] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:31.628140 containerd[1730]: 2025-11-08 00:26:31.624 [WARNING][5670] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279" HandleID="k8s-pod-network.81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-calico--apiserver--74448999c6--ltcnv-eth0" Nov 8 00:26:31.628140 containerd[1730]: 2025-11-08 00:26:31.624 [INFO][5670] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279" HandleID="k8s-pod-network.81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-calico--apiserver--74448999c6--ltcnv-eth0" Nov 8 00:26:31.628140 containerd[1730]: 2025-11-08 00:26:31.625 [INFO][5670] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:31.628140 containerd[1730]: 2025-11-08 00:26:31.626 [INFO][5662] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279" Nov 8 00:26:31.628140 containerd[1730]: time="2025-11-08T00:26:31.628002752Z" level=info msg="TearDown network for sandbox \"81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279\" successfully" Nov 8 00:26:31.628140 containerd[1730]: time="2025-11-08T00:26:31.628025252Z" level=info msg="StopPodSandbox for \"81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279\" returns successfully" Nov 8 00:26:31.628881 containerd[1730]: time="2025-11-08T00:26:31.628575461Z" level=info msg="RemovePodSandbox for \"81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279\"" Nov 8 00:26:31.628881 containerd[1730]: time="2025-11-08T00:26:31.628609562Z" level=info msg="Forcibly stopping sandbox \"81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279\"" Nov 8 00:26:31.640837 containerd[1730]: time="2025-11-08T00:26:31.640685652Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:26:31.657641 containerd[1730]: time="2025-11-08T00:26:31.657442015Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:26:31.657641 containerd[1730]: time="2025-11-08T00:26:31.657522617Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:26:31.658403 kubelet[3231]: E1108 00:26:31.658360 3231 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:26:31.658532 kubelet[3231]: E1108 00:26:31.658409 3231 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:26:31.658532 kubelet[3231]: E1108 00:26:31.658515 3231 kuberuntime_manager.go:1449] "Unhandled Error" err="container 
whisker-backend start failed in pod whisker-7747c7fc7-whbjm_calico-system(5b3e910f-ad36-41f9-8d09-74f4b684ee03): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:26:31.658647 kubelet[3231]: E1108 00:26:31.658609 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7747c7fc7-whbjm" podUID="5b3e910f-ad36-41f9-8d09-74f4b684ee03" Nov 8 00:26:31.701154 containerd[1730]: 2025-11-08 00:26:31.664 [WARNING][5684] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2742f1d4ae-k8s-calico--apiserver--74448999c6--ltcnv-eth0", GenerateName:"calico-apiserver-74448999c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"136d7667-9127-4dfa-b5ce-1dde786b7211", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 25, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74448999c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2742f1d4ae", ContainerID:"3153d1351e920a2c567c650d4c9e9f1b035a2f11594779bab9c11210ded2f79a", Pod:"calico-apiserver-74448999c6-ltcnv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9674754da9e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:31.701154 containerd[1730]: 2025-11-08 00:26:31.665 [INFO][5684] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279" Nov 8 00:26:31.701154 containerd[1730]: 2025-11-08 00:26:31.665 [INFO][5684] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279" iface="eth0" netns="" Nov 8 00:26:31.701154 containerd[1730]: 2025-11-08 00:26:31.666 [INFO][5684] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279" Nov 8 00:26:31.701154 containerd[1730]: 2025-11-08 00:26:31.666 [INFO][5684] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279" Nov 8 00:26:31.701154 containerd[1730]: 2025-11-08 00:26:31.688 [INFO][5691] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279" HandleID="k8s-pod-network.81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-calico--apiserver--74448999c6--ltcnv-eth0" Nov 8 00:26:31.701154 containerd[1730]: 2025-11-08 00:26:31.688 [INFO][5691] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:31.701154 containerd[1730]: 2025-11-08 00:26:31.688 [INFO][5691] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:31.701154 containerd[1730]: 2025-11-08 00:26:31.696 [WARNING][5691] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279" HandleID="k8s-pod-network.81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-calico--apiserver--74448999c6--ltcnv-eth0" Nov 8 00:26:31.701154 containerd[1730]: 2025-11-08 00:26:31.696 [INFO][5691] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279" HandleID="k8s-pod-network.81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-calico--apiserver--74448999c6--ltcnv-eth0" Nov 8 00:26:31.701154 containerd[1730]: 2025-11-08 00:26:31.697 [INFO][5691] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:31.701154 containerd[1730]: 2025-11-08 00:26:31.698 [INFO][5684] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279" Nov 8 00:26:31.701154 containerd[1730]: time="2025-11-08T00:26:31.699853383Z" level=info msg="TearDown network for sandbox \"81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279\" successfully" Nov 8 00:26:31.706644 containerd[1730]: time="2025-11-08T00:26:31.706600089Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:26:31.706770 containerd[1730]: time="2025-11-08T00:26:31.706664090Z" level=info msg="RemovePodSandbox \"81c6f0b1f3cf2fdb4d5e86b0a27096cf13824188394f9f67c3cf9b8941df1279\" returns successfully" Nov 8 00:26:31.707485 containerd[1730]: time="2025-11-08T00:26:31.707180398Z" level=info msg="StopPodSandbox for \"fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5\"" Nov 8 00:26:31.769802 containerd[1730]: 2025-11-08 00:26:31.737 [WARNING][5705] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2742f1d4ae-k8s-csi--node--driver--kbfws-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"77eae253-1bce-4de5-8e9b-23a9c58b4ee0", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 25, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2742f1d4ae", ContainerID:"0b072f342977ba4ebf740a756b2f59bbe5b95280e0e65422629e686272c58401", Pod:"csi-node-driver-kbfws", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.59.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali289bc357945", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:31.769802 containerd[1730]: 2025-11-08 00:26:31.738 [INFO][5705] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5" Nov 8 00:26:31.769802 containerd[1730]: 2025-11-08 00:26:31.738 [INFO][5705] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5" iface="eth0" netns="" Nov 8 00:26:31.769802 containerd[1730]: 2025-11-08 00:26:31.738 [INFO][5705] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5" Nov 8 00:26:31.769802 containerd[1730]: 2025-11-08 00:26:31.738 [INFO][5705] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5" Nov 8 00:26:31.769802 containerd[1730]: 2025-11-08 00:26:31.759 [INFO][5712] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5" HandleID="k8s-pod-network.fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-csi--node--driver--kbfws-eth0" Nov 8 00:26:31.769802 containerd[1730]: 2025-11-08 00:26:31.759 [INFO][5712] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:31.769802 containerd[1730]: 2025-11-08 00:26:31.759 [INFO][5712] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:31.769802 containerd[1730]: 2025-11-08 00:26:31.766 [WARNING][5712] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5" HandleID="k8s-pod-network.fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-csi--node--driver--kbfws-eth0" Nov 8 00:26:31.769802 containerd[1730]: 2025-11-08 00:26:31.766 [INFO][5712] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5" HandleID="k8s-pod-network.fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-csi--node--driver--kbfws-eth0" Nov 8 00:26:31.769802 containerd[1730]: 2025-11-08 00:26:31.767 [INFO][5712] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:31.769802 containerd[1730]: 2025-11-08 00:26:31.768 [INFO][5705] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5" Nov 8 00:26:31.770666 containerd[1730]: time="2025-11-08T00:26:31.769848584Z" level=info msg="TearDown network for sandbox \"fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5\" successfully" Nov 8 00:26:31.770666 containerd[1730]: time="2025-11-08T00:26:31.769879385Z" level=info msg="StopPodSandbox for \"fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5\" returns successfully" Nov 8 00:26:31.770666 containerd[1730]: time="2025-11-08T00:26:31.770465694Z" level=info msg="RemovePodSandbox for \"fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5\"" Nov 8 00:26:31.770666 containerd[1730]: time="2025-11-08T00:26:31.770502095Z" level=info msg="Forcibly stopping sandbox \"fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5\"" Nov 8 00:26:31.836943 containerd[1730]: 2025-11-08 00:26:31.804 [WARNING][5727] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2742f1d4ae-k8s-csi--node--driver--kbfws-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"77eae253-1bce-4de5-8e9b-23a9c58b4ee0", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 25, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2742f1d4ae", ContainerID:"0b072f342977ba4ebf740a756b2f59bbe5b95280e0e65422629e686272c58401", Pod:"csi-node-driver-kbfws", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.59.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali289bc357945", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:31.836943 containerd[1730]: 2025-11-08 00:26:31.804 [INFO][5727] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5" Nov 8 00:26:31.836943 containerd[1730]: 2025-11-08 00:26:31.804 [INFO][5727] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5" iface="eth0" netns="" Nov 8 00:26:31.836943 containerd[1730]: 2025-11-08 00:26:31.804 [INFO][5727] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5" Nov 8 00:26:31.836943 containerd[1730]: 2025-11-08 00:26:31.804 [INFO][5727] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5" Nov 8 00:26:31.836943 containerd[1730]: 2025-11-08 00:26:31.827 [INFO][5734] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5" HandleID="k8s-pod-network.fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-csi--node--driver--kbfws-eth0" Nov 8 00:26:31.836943 containerd[1730]: 2025-11-08 00:26:31.828 [INFO][5734] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:31.836943 containerd[1730]: 2025-11-08 00:26:31.828 [INFO][5734] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:31.836943 containerd[1730]: 2025-11-08 00:26:31.833 [WARNING][5734] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5" HandleID="k8s-pod-network.fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-csi--node--driver--kbfws-eth0" Nov 8 00:26:31.836943 containerd[1730]: 2025-11-08 00:26:31.833 [INFO][5734] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5" HandleID="k8s-pod-network.fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-csi--node--driver--kbfws-eth0" Nov 8 00:26:31.836943 containerd[1730]: 2025-11-08 00:26:31.834 [INFO][5734] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:31.836943 containerd[1730]: 2025-11-08 00:26:31.835 [INFO][5727] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5" Nov 8 00:26:31.837592 containerd[1730]: time="2025-11-08T00:26:31.836987641Z" level=info msg="TearDown network for sandbox \"fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5\" successfully" Nov 8 00:26:31.845349 containerd[1730]: time="2025-11-08T00:26:31.845308072Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:26:31.845470 containerd[1730]: time="2025-11-08T00:26:31.845370673Z" level=info msg="RemovePodSandbox \"fcbd33ff53ff763c90bcec45fadd58d31313cbfcb12a37845eb70af78ac326a5\" returns successfully" Nov 8 00:26:31.846287 containerd[1730]: time="2025-11-08T00:26:31.845948182Z" level=info msg="StopPodSandbox for \"53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591\"" Nov 8 00:26:31.912490 containerd[1730]: 2025-11-08 00:26:31.880 [WARNING][5748] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2742f1d4ae-k8s-goldmane--7c778bb748--sn8mq-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"fc5e470f-14b3-444c-ac8a-3efb084d3809", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 25, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2742f1d4ae", ContainerID:"d3c7fc4342dfa79a90590e5c569b4fa35880585384f081b87ca8768520a0c7c7", Pod:"goldmane-7c778bb748-sn8mq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.59.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid010458482a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:31.912490 containerd[1730]: 2025-11-08 00:26:31.880 [INFO][5748] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591" Nov 8 00:26:31.912490 containerd[1730]: 2025-11-08 00:26:31.880 [INFO][5748] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591" iface="eth0" netns="" Nov 8 00:26:31.912490 containerd[1730]: 2025-11-08 00:26:31.880 [INFO][5748] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591" Nov 8 00:26:31.912490 containerd[1730]: 2025-11-08 00:26:31.880 [INFO][5748] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591" Nov 8 00:26:31.912490 containerd[1730]: 2025-11-08 00:26:31.899 [INFO][5755] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591" HandleID="k8s-pod-network.53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-goldmane--7c778bb748--sn8mq-eth0" Nov 8 00:26:31.912490 containerd[1730]: 2025-11-08 00:26:31.899 [INFO][5755] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:31.912490 containerd[1730]: 2025-11-08 00:26:31.900 [INFO][5755] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:31.912490 containerd[1730]: 2025-11-08 00:26:31.906 [WARNING][5755] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591" HandleID="k8s-pod-network.53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-goldmane--7c778bb748--sn8mq-eth0" Nov 8 00:26:31.912490 containerd[1730]: 2025-11-08 00:26:31.906 [INFO][5755] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591" HandleID="k8s-pod-network.53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-goldmane--7c778bb748--sn8mq-eth0" Nov 8 00:26:31.912490 containerd[1730]: 2025-11-08 00:26:31.908 [INFO][5755] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:31.912490 containerd[1730]: 2025-11-08 00:26:31.911 [INFO][5748] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591" Nov 8 00:26:31.913897 containerd[1730]: time="2025-11-08T00:26:31.912808734Z" level=info msg="TearDown network for sandbox \"53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591\" successfully" Nov 8 00:26:31.913897 containerd[1730]: time="2025-11-08T00:26:31.912844335Z" level=info msg="StopPodSandbox for \"53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591\" returns successfully" Nov 8 00:26:31.915748 containerd[1730]: time="2025-11-08T00:26:31.914401359Z" level=info msg="RemovePodSandbox for \"53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591\"" Nov 8 00:26:31.915748 containerd[1730]: time="2025-11-08T00:26:31.914438260Z" level=info msg="Forcibly stopping sandbox \"53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591\"" Nov 8 00:26:31.985461 containerd[1730]: 2025-11-08 00:26:31.950 [WARNING][5769] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2742f1d4ae-k8s-goldmane--7c778bb748--sn8mq-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"fc5e470f-14b3-444c-ac8a-3efb084d3809", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 25, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2742f1d4ae", ContainerID:"d3c7fc4342dfa79a90590e5c569b4fa35880585384f081b87ca8768520a0c7c7", Pod:"goldmane-7c778bb748-sn8mq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.59.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid010458482a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:31.985461 containerd[1730]: 2025-11-08 00:26:31.950 [INFO][5769] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591" Nov 8 00:26:31.985461 containerd[1730]: 2025-11-08 00:26:31.950 [INFO][5769] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591" iface="eth0" netns="" Nov 8 00:26:31.985461 containerd[1730]: 2025-11-08 00:26:31.950 [INFO][5769] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591" Nov 8 00:26:31.985461 containerd[1730]: 2025-11-08 00:26:31.950 [INFO][5769] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591" Nov 8 00:26:31.985461 containerd[1730]: 2025-11-08 00:26:31.972 [INFO][5777] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591" HandleID="k8s-pod-network.53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-goldmane--7c778bb748--sn8mq-eth0" Nov 8 00:26:31.985461 containerd[1730]: 2025-11-08 00:26:31.972 [INFO][5777] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:31.985461 containerd[1730]: 2025-11-08 00:26:31.972 [INFO][5777] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:31.985461 containerd[1730]: 2025-11-08 00:26:31.979 [WARNING][5777] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591" HandleID="k8s-pod-network.53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-goldmane--7c778bb748--sn8mq-eth0" Nov 8 00:26:31.985461 containerd[1730]: 2025-11-08 00:26:31.979 [INFO][5777] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591" HandleID="k8s-pod-network.53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-goldmane--7c778bb748--sn8mq-eth0" Nov 8 00:26:31.985461 containerd[1730]: 2025-11-08 00:26:31.981 [INFO][5777] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:31.985461 containerd[1730]: 2025-11-08 00:26:31.983 [INFO][5769] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591" Nov 8 00:26:31.986182 containerd[1730]: time="2025-11-08T00:26:31.985525179Z" level=info msg="TearDown network for sandbox \"53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591\" successfully" Nov 8 00:26:31.993974 containerd[1730]: time="2025-11-08T00:26:31.993800709Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:26:31.993974 containerd[1730]: time="2025-11-08T00:26:31.993873610Z" level=info msg="RemovePodSandbox \"53ab7e86b3c5166fb4c70f75e28c9c8e4a298c810e755918a214696c2f251591\" returns successfully" Nov 8 00:26:31.994757 containerd[1730]: time="2025-11-08T00:26:31.994470819Z" level=info msg="StopPodSandbox for \"1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd\"" Nov 8 00:26:32.058584 containerd[1730]: 2025-11-08 00:26:32.026 [WARNING][5792] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2742f1d4ae-k8s-calico--apiserver--74448999c6--6grzk-eth0", GenerateName:"calico-apiserver-74448999c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"0ce899e0-d12a-4abb-b40d-26c4cc149868", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 25, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74448999c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2742f1d4ae", ContainerID:"1a5639493f6f315aacf6a09e99d57671e89667730eb47f3f4db44ce089a6b9da", Pod:"calico-apiserver-74448999c6-6grzk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali415ca59f634", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:32.058584 containerd[1730]: 2025-11-08 00:26:32.026 [INFO][5792] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd" Nov 8 00:26:32.058584 containerd[1730]: 2025-11-08 00:26:32.026 [INFO][5792] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd" iface="eth0" netns="" Nov 8 00:26:32.058584 containerd[1730]: 2025-11-08 00:26:32.026 [INFO][5792] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd" Nov 8 00:26:32.058584 containerd[1730]: 2025-11-08 00:26:32.026 [INFO][5792] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd" Nov 8 00:26:32.058584 containerd[1730]: 2025-11-08 00:26:32.045 [INFO][5799] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd" HandleID="k8s-pod-network.1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-calico--apiserver--74448999c6--6grzk-eth0" Nov 8 00:26:32.058584 containerd[1730]: 2025-11-08 00:26:32.046 [INFO][5799] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:32.058584 containerd[1730]: 2025-11-08 00:26:32.046 [INFO][5799] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:32.058584 containerd[1730]: 2025-11-08 00:26:32.052 [WARNING][5799] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd" HandleID="k8s-pod-network.1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-calico--apiserver--74448999c6--6grzk-eth0" Nov 8 00:26:32.058584 containerd[1730]: 2025-11-08 00:26:32.052 [INFO][5799] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd" HandleID="k8s-pod-network.1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-calico--apiserver--74448999c6--6grzk-eth0" Nov 8 00:26:32.058584 containerd[1730]: 2025-11-08 00:26:32.053 [INFO][5799] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:32.058584 containerd[1730]: 2025-11-08 00:26:32.054 [INFO][5792] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd" Nov 8 00:26:32.060351 containerd[1730]: time="2025-11-08T00:26:32.059267939Z" level=info msg="TearDown network for sandbox \"1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd\" successfully" Nov 8 00:26:32.060351 containerd[1730]: time="2025-11-08T00:26:32.059319940Z" level=info msg="StopPodSandbox for \"1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd\" returns successfully" Nov 8 00:26:32.061041 containerd[1730]: time="2025-11-08T00:26:32.060925065Z" level=info msg="RemovePodSandbox for \"1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd\"" Nov 8 00:26:32.061041 containerd[1730]: time="2025-11-08T00:26:32.060973566Z" level=info msg="Forcibly stopping sandbox \"1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd\"" Nov 8 00:26:32.127840 containerd[1730]: 2025-11-08 00:26:32.097 [WARNING][5814] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2742f1d4ae-k8s-calico--apiserver--74448999c6--6grzk-eth0", GenerateName:"calico-apiserver-74448999c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"0ce899e0-d12a-4abb-b40d-26c4cc149868", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 25, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74448999c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2742f1d4ae", ContainerID:"1a5639493f6f315aacf6a09e99d57671e89667730eb47f3f4db44ce089a6b9da", Pod:"calico-apiserver-74448999c6-6grzk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali415ca59f634", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:32.127840 containerd[1730]: 2025-11-08 00:26:32.097 [INFO][5814] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd" Nov 8 00:26:32.127840 containerd[1730]: 2025-11-08 00:26:32.098 [INFO][5814] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd" iface="eth0" netns="" Nov 8 00:26:32.127840 containerd[1730]: 2025-11-08 00:26:32.098 [INFO][5814] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd" Nov 8 00:26:32.127840 containerd[1730]: 2025-11-08 00:26:32.098 [INFO][5814] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd" Nov 8 00:26:32.127840 containerd[1730]: 2025-11-08 00:26:32.117 [INFO][5821] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd" HandleID="k8s-pod-network.1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-calico--apiserver--74448999c6--6grzk-eth0" Nov 8 00:26:32.127840 containerd[1730]: 2025-11-08 00:26:32.117 [INFO][5821] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:32.127840 containerd[1730]: 2025-11-08 00:26:32.117 [INFO][5821] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:32.127840 containerd[1730]: 2025-11-08 00:26:32.123 [WARNING][5821] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd" HandleID="k8s-pod-network.1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-calico--apiserver--74448999c6--6grzk-eth0" Nov 8 00:26:32.127840 containerd[1730]: 2025-11-08 00:26:32.123 [INFO][5821] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd" HandleID="k8s-pod-network.1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-calico--apiserver--74448999c6--6grzk-eth0" Nov 8 00:26:32.127840 containerd[1730]: 2025-11-08 00:26:32.124 [INFO][5821] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:32.127840 containerd[1730]: 2025-11-08 00:26:32.125 [INFO][5814] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd" Nov 8 00:26:32.127840 containerd[1730]: time="2025-11-08T00:26:32.126805802Z" level=info msg="TearDown network for sandbox \"1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd\" successfully" Nov 8 00:26:32.134738 containerd[1730]: time="2025-11-08T00:26:32.134689226Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 8 00:26:32.134852 containerd[1730]: time="2025-11-08T00:26:32.134765727Z" level=info msg="RemovePodSandbox \"1e1620c1e60a9a17c2b01692007f32db1407c6080222dc688b94497da19b19fd\" returns successfully" Nov 8 00:26:32.135347 containerd[1730]: time="2025-11-08T00:26:32.135318436Z" level=info msg="StopPodSandbox for \"4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950\"" Nov 8 00:26:32.197346 containerd[1730]: 2025-11-08 00:26:32.167 [WARNING][5835] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2742f1d4ae-k8s-coredns--66bc5c9577--tn4hf-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"dd3dad8e-8761-486d-82e7-516eeba2f8a7", ResourceVersion:"1052", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 25, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2742f1d4ae", ContainerID:"7e5cdac77ecd039862578feecfeb1f270179d6124f22b404ce538db8c315ba9b", Pod:"coredns-66bc5c9577-tn4hf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali00b3669591c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:32.197346 containerd[1730]: 2025-11-08 00:26:32.167 [INFO][5835] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950" Nov 8 00:26:32.197346 containerd[1730]: 2025-11-08 00:26:32.167 [INFO][5835] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950" iface="eth0" netns="" Nov 8 00:26:32.197346 containerd[1730]: 2025-11-08 00:26:32.167 [INFO][5835] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950" Nov 8 00:26:32.197346 containerd[1730]: 2025-11-08 00:26:32.167 [INFO][5835] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950" Nov 8 00:26:32.197346 containerd[1730]: 2025-11-08 00:26:32.187 [INFO][5842] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950" HandleID="k8s-pod-network.4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-coredns--66bc5c9577--tn4hf-eth0" Nov 8 00:26:32.197346 containerd[1730]: 2025-11-08 00:26:32.187 [INFO][5842] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:32.197346 containerd[1730]: 2025-11-08 00:26:32.187 [INFO][5842] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:32.197346 containerd[1730]: 2025-11-08 00:26:32.193 [WARNING][5842] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950" HandleID="k8s-pod-network.4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-coredns--66bc5c9577--tn4hf-eth0" Nov 8 00:26:32.197346 containerd[1730]: 2025-11-08 00:26:32.193 [INFO][5842] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950" HandleID="k8s-pod-network.4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-coredns--66bc5c9577--tn4hf-eth0" Nov 8 00:26:32.197346 containerd[1730]: 2025-11-08 00:26:32.194 [INFO][5842] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:32.197346 containerd[1730]: 2025-11-08 00:26:32.196 [INFO][5835] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950" Nov 8 00:26:32.197346 containerd[1730]: time="2025-11-08T00:26:32.197144109Z" level=info msg="TearDown network for sandbox \"4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950\" successfully" Nov 8 00:26:32.197346 containerd[1730]: time="2025-11-08T00:26:32.197168409Z" level=info msg="StopPodSandbox for \"4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950\" returns successfully" Nov 8 00:26:32.198075 containerd[1730]: time="2025-11-08T00:26:32.197766619Z" level=info msg="RemovePodSandbox for \"4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950\"" Nov 8 00:26:32.198075 containerd[1730]: time="2025-11-08T00:26:32.197804519Z" level=info msg="Forcibly stopping sandbox \"4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950\"" Nov 8 00:26:32.268748 containerd[1730]: 2025-11-08 00:26:32.232 [WARNING][5856] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2742f1d4ae-k8s-coredns--66bc5c9577--tn4hf-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"dd3dad8e-8761-486d-82e7-516eeba2f8a7", ResourceVersion:"1052", Generation:0, CreationTimestamp:time.Date(2025, time.November, 8, 0, 25, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2742f1d4ae", ContainerID:"7e5cdac77ecd039862578feecfeb1f270179d6124f22b404ce538db8c315ba9b", Pod:"coredns-66bc5c9577-tn4hf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali00b3669591c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 8 00:26:32.268748 containerd[1730]: 2025-11-08 00:26:32.233 [INFO][5856] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950" Nov 8 00:26:32.268748 containerd[1730]: 2025-11-08 00:26:32.233 [INFO][5856] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950" iface="eth0" netns="" Nov 8 00:26:32.268748 containerd[1730]: 2025-11-08 00:26:32.233 [INFO][5856] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950" Nov 8 00:26:32.268748 containerd[1730]: 2025-11-08 00:26:32.233 [INFO][5856] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950" Nov 8 00:26:32.268748 containerd[1730]: 2025-11-08 00:26:32.257 [INFO][5863] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950" HandleID="k8s-pod-network.4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-coredns--66bc5c9577--tn4hf-eth0" Nov 8 00:26:32.268748 containerd[1730]: 2025-11-08 00:26:32.257 [INFO][5863] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 8 00:26:32.268748 containerd[1730]: 2025-11-08 00:26:32.257 [INFO][5863] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 8 00:26:32.268748 containerd[1730]: 2025-11-08 00:26:32.263 [WARNING][5863] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950" HandleID="k8s-pod-network.4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-coredns--66bc5c9577--tn4hf-eth0" Nov 8 00:26:32.268748 containerd[1730]: 2025-11-08 00:26:32.263 [INFO][5863] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950" HandleID="k8s-pod-network.4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950" Workload="ci--4081.3.6--n--2742f1d4ae-k8s-coredns--66bc5c9577--tn4hf-eth0" Nov 8 00:26:32.268748 containerd[1730]: 2025-11-08 00:26:32.265 [INFO][5863] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 8 00:26:32.268748 containerd[1730]: 2025-11-08 00:26:32.266 [INFO][5856] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950" Nov 8 00:26:32.268748 containerd[1730]: time="2025-11-08T00:26:32.267515216Z" level=info msg="TearDown network for sandbox \"4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950\" successfully" Nov 8 00:26:32.275638 containerd[1730]: time="2025-11-08T00:26:32.275594244Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 8 00:26:32.275774 containerd[1730]: time="2025-11-08T00:26:32.275664245Z" level=info msg="RemovePodSandbox \"4e65a30038a6a7526b4793cb73b70a33fcdf71b5630c6d2578799a6c95f4c950\" returns successfully" Nov 8 00:26:33.130837 containerd[1730]: time="2025-11-08T00:26:33.130530198Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:26:33.381034 containerd[1730]: time="2025-11-08T00:26:33.380883538Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:26:33.383797 containerd[1730]: time="2025-11-08T00:26:33.383738483Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:26:33.383941 containerd[1730]: time="2025-11-08T00:26:33.383769884Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:26:33.384174 kubelet[3231]: E1108 00:26:33.384129 3231 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:26:33.384554 kubelet[3231]: E1108 00:26:33.384180 3231 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:26:33.384554 kubelet[3231]: E1108 00:26:33.384280 3231 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-sn8mq_calico-system(fc5e470f-14b3-444c-ac8a-3efb084d3809): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:26:33.384554 kubelet[3231]: E1108 00:26:33.384327 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-sn8mq" podUID="fc5e470f-14b3-444c-ac8a-3efb084d3809" Nov 8 00:26:34.130894 containerd[1730]: time="2025-11-08T00:26:34.130576237Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:26:34.390623 containerd[1730]: time="2025-11-08T00:26:34.390243823Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:26:34.393643 containerd[1730]: time="2025-11-08T00:26:34.393560575Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:26:34.393765 containerd[1730]: time="2025-11-08T00:26:34.393577976Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:26:34.394688 kubelet[3231]: E1108 00:26:34.394040 3231 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:26:34.394688 kubelet[3231]: E1108 00:26:34.394099 3231 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:26:34.394688 kubelet[3231]: E1108 00:26:34.394196 3231 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-598d7bd9d8-kgsh2_calico-system(928ddcfe-b055-4feb-bfb6-23dedc6fa744): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:26:34.394688 kubelet[3231]: E1108 00:26:34.394240 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-598d7bd9d8-kgsh2" podUID="928ddcfe-b055-4feb-bfb6-23dedc6fa744" Nov 8 00:26:36.130140 containerd[1730]: time="2025-11-08T00:26:36.129850300Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:26:36.374973 containerd[1730]: time="2025-11-08T00:26:36.374910657Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:26:36.378606 containerd[1730]: time="2025-11-08T00:26:36.378555114Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:26:36.378766 containerd[1730]: time="2025-11-08T00:26:36.378651516Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:26:36.379039 kubelet[3231]: E1108 00:26:36.378992 3231 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:26:36.379444 kubelet[3231]: E1108 00:26:36.379050 3231 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:26:36.379444 kubelet[3231]: E1108 00:26:36.379146 3231 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-kbfws_calico-system(77eae253-1bce-4de5-8e9b-23a9c58b4ee0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:26:36.380606 containerd[1730]: time="2025-11-08T00:26:36.380494145Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:26:36.617251 containerd[1730]: time="2025-11-08T00:26:36.617185770Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:26:36.620201 containerd[1730]: time="2025-11-08T00:26:36.620148816Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:26:36.620303 containerd[1730]: time="2025-11-08T00:26:36.620235418Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:26:36.620456 kubelet[3231]: E1108 00:26:36.620412 3231 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:26:36.620595 kubelet[3231]: E1108 00:26:36.620466 3231 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:26:36.620673 kubelet[3231]: E1108 00:26:36.620558 3231 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-kbfws_calico-system(77eae253-1bce-4de5-8e9b-23a9c58b4ee0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:26:36.621146 kubelet[3231]: E1108 00:26:36.621098 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc 
error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kbfws" podUID="77eae253-1bce-4de5-8e9b-23a9c58b4ee0" Nov 8 00:26:37.131597 containerd[1730]: time="2025-11-08T00:26:37.131306361Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:26:37.373051 containerd[1730]: time="2025-11-08T00:26:37.373001964Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:26:37.376118 containerd[1730]: time="2025-11-08T00:26:37.375994012Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:26:37.376118 containerd[1730]: time="2025-11-08T00:26:37.376042712Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:26:37.376319 kubelet[3231]: E1108 00:26:37.376262 3231 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:26:37.376397 kubelet[3231]: E1108 00:26:37.376321 3231 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:26:37.376440 kubelet[3231]: E1108 00:26:37.376419 3231 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-74448999c6-ltcnv_calico-apiserver(136d7667-9127-4dfa-b5ce-1dde786b7211): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:26:37.376503 kubelet[3231]: E1108 00:26:37.376464 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74448999c6-ltcnv" podUID="136d7667-9127-4dfa-b5ce-1dde786b7211" Nov 8 00:26:38.130336 containerd[1730]: time="2025-11-08T00:26:38.130283882Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:26:38.376174 containerd[1730]: time="2025-11-08T00:26:38.376121251Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:26:38.383353 containerd[1730]: time="2025-11-08T00:26:38.383227963Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:26:38.383353 containerd[1730]: time="2025-11-08T00:26:38.383324264Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:26:38.383707 kubelet[3231]: E1108 00:26:38.383504 3231 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:26:38.383707 kubelet[3231]: E1108 00:26:38.383555 3231 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:26:38.383707 kubelet[3231]: E1108 00:26:38.383644 3231 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-74448999c6-6grzk_calico-apiserver(0ce899e0-d12a-4abb-b40d-26c4cc149868): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:26:38.383707 kubelet[3231]: E1108 00:26:38.383691 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74448999c6-6grzk" podUID="0ce899e0-d12a-4abb-b40d-26c4cc149868" Nov 8 00:26:44.130640 kubelet[3231]: E1108 00:26:44.130579 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7747c7fc7-whbjm" podUID="5b3e910f-ad36-41f9-8d09-74f4b684ee03" Nov 8 00:26:46.131413 kubelet[3231]: E1108 00:26:46.131186 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-sn8mq" podUID="fc5e470f-14b3-444c-ac8a-3efb084d3809" Nov 8 00:26:47.131484 kubelet[3231]: E1108 00:26:47.131426 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-598d7bd9d8-kgsh2" podUID="928ddcfe-b055-4feb-bfb6-23dedc6fa744" Nov 8 00:26:50.131366 kubelet[3231]: E1108 00:26:50.130787 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74448999c6-6grzk" podUID="0ce899e0-d12a-4abb-b40d-26c4cc149868" Nov 8 00:26:51.132550 kubelet[3231]: E1108 00:26:51.132486 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kbfws" podUID="77eae253-1bce-4de5-8e9b-23a9c58b4ee0" Nov 8 00:26:52.130461 kubelet[3231]: E1108 00:26:52.129909 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74448999c6-ltcnv" podUID="136d7667-9127-4dfa-b5ce-1dde786b7211" Nov 8 00:26:55.146823 containerd[1730]: time="2025-11-08T00:26:55.145639609Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 8 00:26:55.419815 containerd[1730]: time="2025-11-08T00:26:55.418292513Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:26:55.422376 containerd[1730]: time="2025-11-08T00:26:55.422286872Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 8 00:26:55.422673 containerd[1730]: time="2025-11-08T00:26:55.422336773Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 8 00:26:55.423231 kubelet[3231]: E1108 00:26:55.422980 3231 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:26:55.423231 kubelet[3231]: E1108 00:26:55.423044 3231 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 8 00:26:55.423231 kubelet[3231]: E1108 00:26:55.423146 3231 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-7747c7fc7-whbjm_calico-system(5b3e910f-ad36-41f9-8d09-74f4b684ee03): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 8 00:26:55.427280 containerd[1730]: time="2025-11-08T00:26:55.426075928Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 8 00:26:55.689319 containerd[1730]: time="2025-11-08T00:26:55.681953537Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:26:55.689319 containerd[1730]: time="2025-11-08T00:26:55.684778772Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 8 00:26:55.689319 containerd[1730]: time="2025-11-08T00:26:55.684877774Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 8 00:26:55.689604 kubelet[3231]: E1108 00:26:55.685095 3231 log.go:32] "PullImage from image service failed" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:26:55.689604 kubelet[3231]: E1108 00:26:55.685147 3231 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 8 00:26:55.689604 kubelet[3231]: E1108 00:26:55.685240 3231 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-7747c7fc7-whbjm_calico-system(5b3e910f-ad36-41f9-8d09-74f4b684ee03): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 8 00:26:55.689793 kubelet[3231]: E1108 00:26:55.685289 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7747c7fc7-whbjm" podUID="5b3e910f-ad36-41f9-8d09-74f4b684ee03" Nov 8 00:26:58.132335 containerd[1730]: time="2025-11-08T00:26:58.132183471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 8 00:26:58.386657 containerd[1730]: time="2025-11-08T00:26:58.386507141Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:26:58.389769 containerd[1730]: time="2025-11-08T00:26:58.389651580Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 8 00:26:58.389769 containerd[1730]: time="2025-11-08T00:26:58.389701580Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 8 00:26:58.389956 kubelet[3231]: E1108 00:26:58.389904 3231 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:26:58.390320 kubelet[3231]: E1108 00:26:58.389956 3231 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 8 00:26:58.390320 kubelet[3231]: E1108 00:26:58.390049 3231 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-sn8mq_calico-system(fc5e470f-14b3-444c-ac8a-3efb084d3809): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 8 00:26:58.390320 kubelet[3231]: E1108 00:26:58.390091 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-sn8mq" podUID="fc5e470f-14b3-444c-ac8a-3efb084d3809" Nov 8 00:27:00.131896 containerd[1730]: time="2025-11-08T00:27:00.131837891Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 8 00:27:00.383353 containerd[1730]: time="2025-11-08T00:27:00.383195144Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:27:00.386370 containerd[1730]: time="2025-11-08T00:27:00.386317190Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 8 00:27:00.386493 containerd[1730]: time="2025-11-08T00:27:00.386347591Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 8 00:27:00.387914 kubelet[3231]: E1108 00:27:00.387865 3231 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:27:00.388305 kubelet[3231]: E1108 00:27:00.387925 3231 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 8 00:27:00.388305 kubelet[3231]: E1108 00:27:00.388024 3231 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-598d7bd9d8-kgsh2_calico-system(928ddcfe-b055-4feb-bfb6-23dedc6fa744): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 8 00:27:00.388305 kubelet[3231]: E1108 00:27:00.388068 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-598d7bd9d8-kgsh2" podUID="928ddcfe-b055-4feb-bfb6-23dedc6fa744" Nov 8 00:27:02.132340 containerd[1730]: time="2025-11-08T00:27:02.131996974Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:27:02.388830 containerd[1730]: time="2025-11-08T00:27:02.388648465Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:27:02.392916 containerd[1730]: time="2025-11-08T00:27:02.391914813Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:27:02.392916 containerd[1730]: time="2025-11-08T00:27:02.392027715Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:27:02.393099 kubelet[3231]: E1108 00:27:02.392271 3231 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:27:02.393099 kubelet[3231]: E1108 00:27:02.392324 3231 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:27:02.393099 kubelet[3231]: E1108 00:27:02.392416 3231 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-74448999c6-6grzk_calico-apiserver(0ce899e0-d12a-4abb-b40d-26c4cc149868): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:27:02.393099 kubelet[3231]: E1108 00:27:02.392457 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74448999c6-6grzk" podUID="0ce899e0-d12a-4abb-b40d-26c4cc149868" Nov 8 00:27:03.133687 containerd[1730]: 
time="2025-11-08T00:27:03.132857557Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 8 00:27:03.394774 containerd[1730]: time="2025-11-08T00:27:03.394609423Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:27:03.397921 containerd[1730]: time="2025-11-08T00:27:03.397691169Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 8 00:27:03.397921 containerd[1730]: time="2025-11-08T00:27:03.397745270Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 8 00:27:03.398089 kubelet[3231]: E1108 00:27:03.397968 3231 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:27:03.398089 kubelet[3231]: E1108 00:27:03.398025 3231 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 8 00:27:03.399356 kubelet[3231]: E1108 00:27:03.398298 3231 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-74448999c6-ltcnv_calico-apiserver(136d7667-9127-4dfa-b5ce-1dde786b7211): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 8 00:27:03.399356 kubelet[3231]: E1108 00:27:03.398342 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74448999c6-ltcnv" podUID="136d7667-9127-4dfa-b5ce-1dde786b7211" Nov 8 00:27:03.399545 containerd[1730]: time="2025-11-08T00:27:03.398417680Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 8 00:27:03.643763 containerd[1730]: time="2025-11-08T00:27:03.643550852Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:27:03.646497 containerd[1730]: time="2025-11-08T00:27:03.646252791Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 8 00:27:03.646497 containerd[1730]: time="2025-11-08T00:27:03.646358692Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 8 00:27:03.647521 kubelet[3231]: E1108 00:27:03.646848 3231 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:27:03.647521 kubelet[3231]: E1108 00:27:03.646907 3231 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 8 00:27:03.647521 kubelet[3231]: E1108 00:27:03.646999 3231 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-kbfws_calico-system(77eae253-1bce-4de5-8e9b-23a9c58b4ee0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 8 00:27:03.649128 containerd[1730]: time="2025-11-08T00:27:03.648892829Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 8 00:27:03.908526 containerd[1730]: time="2025-11-08T00:27:03.908387279Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 8 00:27:03.912598 containerd[1730]: time="2025-11-08T00:27:03.912445038Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 8 00:27:03.912598 containerd[1730]: time="2025-11-08T00:27:03.912469438Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 8 00:27:03.913418 kubelet[3231]: E1108 00:27:03.913121 3231 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:27:03.913418 kubelet[3231]: E1108 00:27:03.913195 3231 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 8 00:27:03.913418 kubelet[3231]: E1108 00:27:03.913286 3231 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-kbfws_calico-system(77eae253-1bce-4de5-8e9b-23a9c58b4ee0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 8 00:27:03.914043 kubelet[3231]: E1108 00:27:03.913338 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kbfws" podUID="77eae253-1bce-4de5-8e9b-23a9c58b4ee0" Nov 8 00:27:08.131234 kubelet[3231]: E1108 00:27:08.131145 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7747c7fc7-whbjm" podUID="5b3e910f-ad36-41f9-8d09-74f4b684ee03" Nov 8 00:27:11.135990 kubelet[3231]: E1108 00:27:11.135935 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-sn8mq" podUID="fc5e470f-14b3-444c-ac8a-3efb084d3809" Nov 8 00:27:11.138738 kubelet[3231]: E1108 00:27:11.137600 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-598d7bd9d8-kgsh2" podUID="928ddcfe-b055-4feb-bfb6-23dedc6fa744" Nov 8 00:27:17.133560 kubelet[3231]: E1108 00:27:17.133481 3231 pod_workers.go:1324] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74448999c6-ltcnv" podUID="136d7667-9127-4dfa-b5ce-1dde786b7211" Nov 8 00:27:17.135006 kubelet[3231]: E1108 00:27:17.134887 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74448999c6-6grzk" podUID="0ce899e0-d12a-4abb-b40d-26c4cc149868" Nov 8 00:27:17.398586 systemd[1]: run-containerd-runc-k8s.io-d002ffc835705b323fa4eceed52c48a1d0a9167537104fbe59ca3caf5760f94f-runc.I49FMv.mount: Deactivated successfully. Nov 8 00:27:17.422047 systemd[1]: Started sshd@7-10.200.8.42:22-10.200.16.10:43632.service - OpenSSH per-connection server daemon (10.200.16.10:43632). Nov 8 00:27:18.069685 sshd[5941]: Accepted publickey for core from 10.200.16.10 port 43632 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY Nov 8 00:27:18.072369 sshd[5941]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:27:18.081913 systemd-logind[1697]: New session 10 of user core. Nov 8 00:27:18.088187 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 8 00:27:18.131668 kubelet[3231]: E1108 00:27:18.131600 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kbfws" podUID="77eae253-1bce-4de5-8e9b-23a9c58b4ee0" Nov 8 00:27:18.627434 sshd[5941]: pam_unix(sshd:session): session closed for user core Nov 8 00:27:18.633963 systemd[1]: sshd@7-10.200.8.42:22-10.200.16.10:43632.service: Deactivated successfully. Nov 8 00:27:18.635406 systemd-logind[1697]: Session 10 logged out. Waiting for processes to exit. Nov 8 00:27:18.638980 systemd[1]: session-10.scope: Deactivated successfully. Nov 8 00:27:18.640664 systemd-logind[1697]: Removed session 10. 
Nov 8 00:27:23.133503 kubelet[3231]: E1108 00:27:23.132096 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-598d7bd9d8-kgsh2" podUID="928ddcfe-b055-4feb-bfb6-23dedc6fa744" Nov 8 00:27:23.138657 kubelet[3231]: E1108 00:27:23.137948 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7747c7fc7-whbjm" podUID="5b3e910f-ad36-41f9-8d09-74f4b684ee03" Nov 8 00:27:23.741911 systemd[1]: Started sshd@8-10.200.8.42:22-10.200.16.10:37632.service - OpenSSH per-connection server daemon (10.200.16.10:37632). Nov 8 00:27:24.368430 sshd[5962]: Accepted publickey for core from 10.200.16.10 port 37632 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY Nov 8 00:27:24.371559 sshd[5962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:27:24.382957 systemd-logind[1697]: New session 11 of user core. Nov 8 00:27:24.385913 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 8 00:27:24.919566 sshd[5962]: pam_unix(sshd:session): session closed for user core Nov 8 00:27:24.926078 systemd[1]: sshd@8-10.200.8.42:22-10.200.16.10:37632.service: Deactivated successfully. Nov 8 00:27:24.929647 systemd[1]: session-11.scope: Deactivated successfully. Nov 8 00:27:24.932338 systemd-logind[1697]: Session 11 logged out. Waiting for processes to exit. Nov 8 00:27:24.934762 systemd-logind[1697]: Removed session 11. 
Nov 8 00:27:25.131715 kubelet[3231]: E1108 00:27:25.131656 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-sn8mq" podUID="fc5e470f-14b3-444c-ac8a-3efb084d3809"
Nov 8 00:27:28.130797 kubelet[3231]: E1108 00:27:28.130468 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74448999c6-ltcnv" podUID="136d7667-9127-4dfa-b5ce-1dde786b7211"
Nov 8 00:27:29.132489 kubelet[3231]: E1108 00:27:29.132442 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74448999c6-6grzk" podUID="0ce899e0-d12a-4abb-b40d-26c4cc149868"
Nov 8 00:27:30.040137 systemd[1]: Started sshd@9-10.200.8.42:22-10.200.16.10:38392.service - OpenSSH per-connection server daemon (10.200.16.10:38392).
Nov 8 00:27:30.132584 kubelet[3231]: E1108 00:27:30.132531 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kbfws" podUID="77eae253-1bce-4de5-8e9b-23a9c58b4ee0"
Nov 8 00:27:30.668315 sshd[5976]: Accepted publickey for core from 10.200.16.10 port 38392 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY
Nov 8 00:27:30.670712 sshd[5976]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:27:30.677585 systemd-logind[1697]: New session 12 of user core.
Nov 8 00:27:30.683899 systemd[1]: Started session-12.scope - Session 12 of User core.
Nov 8 00:27:31.266932 sshd[5976]: pam_unix(sshd:session): session closed for user core
Nov 8 00:27:31.271520 systemd[1]: sshd@9-10.200.8.42:22-10.200.16.10:38392.service: Deactivated successfully.
Nov 8 00:27:31.276351 systemd[1]: session-12.scope: Deactivated successfully.
Nov 8 00:27:31.277637 systemd-logind[1697]: Session 12 logged out. Waiting for processes to exit.
Nov 8 00:27:31.280138 systemd-logind[1697]: Removed session 12.
Nov 8 00:27:31.384946 systemd[1]: Started sshd@10-10.200.8.42:22-10.200.16.10:38396.service - OpenSSH per-connection server daemon (10.200.16.10:38396).
Nov 8 00:27:32.015618 sshd[5992]: Accepted publickey for core from 10.200.16.10 port 38396 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY
Nov 8 00:27:32.018335 sshd[5992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:27:32.024788 systemd-logind[1697]: New session 13 of user core.
Nov 8 00:27:32.031897 systemd[1]: Started session-13.scope - Session 13 of User core.
Nov 8 00:27:32.598069 sshd[5992]: pam_unix(sshd:session): session closed for user core
Nov 8 00:27:32.602971 systemd-logind[1697]: Session 13 logged out. Waiting for processes to exit.
Nov 8 00:27:32.603744 systemd[1]: sshd@10-10.200.8.42:22-10.200.16.10:38396.service: Deactivated successfully.
Nov 8 00:27:32.608581 systemd[1]: session-13.scope: Deactivated successfully.
Nov 8 00:27:32.613448 systemd-logind[1697]: Removed session 13.
Nov 8 00:27:32.715048 systemd[1]: Started sshd@11-10.200.8.42:22-10.200.16.10:38410.service - OpenSSH per-connection server daemon (10.200.16.10:38410).
Nov 8 00:27:33.353957 sshd[6003]: Accepted publickey for core from 10.200.16.10 port 38410 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY
Nov 8 00:27:33.357557 sshd[6003]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:27:33.362817 systemd-logind[1697]: New session 14 of user core.
Nov 8 00:27:33.365905 systemd[1]: Started session-14.scope - Session 14 of User core.
Nov 8 00:27:33.879620 sshd[6003]: pam_unix(sshd:session): session closed for user core
Nov 8 00:27:33.883326 systemd-logind[1697]: Session 14 logged out. Waiting for processes to exit.
Nov 8 00:27:33.884227 systemd[1]: sshd@11-10.200.8.42:22-10.200.16.10:38410.service: Deactivated successfully.
Nov 8 00:27:33.888307 systemd[1]: session-14.scope: Deactivated successfully.
Nov 8 00:27:33.892164 systemd-logind[1697]: Removed session 14.
Nov 8 00:27:34.131371 kubelet[3231]: E1108 00:27:34.131215 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7747c7fc7-whbjm" podUID="5b3e910f-ad36-41f9-8d09-74f4b684ee03"
Nov 8 00:27:37.133107 kubelet[3231]: E1108 00:27:37.133028 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-598d7bd9d8-kgsh2" podUID="928ddcfe-b055-4feb-bfb6-23dedc6fa744"
Nov 8 00:27:39.008118 systemd[1]: Started sshd@12-10.200.8.42:22-10.200.16.10:38420.service - OpenSSH per-connection server daemon (10.200.16.10:38420).
Nov 8 00:27:39.645566 sshd[6019]: Accepted publickey for core from 10.200.16.10 port 38420 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY
Nov 8 00:27:39.647094 sshd[6019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:27:39.651044 systemd-logind[1697]: New session 15 of user core.
Nov 8 00:27:39.654893 systemd[1]: Started session-15.scope - Session 15 of User core.
Nov 8 00:27:40.131332 containerd[1730]: time="2025-11-08T00:27:40.131222339Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Nov 8 00:27:40.176113 sshd[6019]: pam_unix(sshd:session): session closed for user core
Nov 8 00:27:40.182311 systemd[1]: sshd@12-10.200.8.42:22-10.200.16.10:38420.service: Deactivated successfully.
Nov 8 00:27:40.185968 systemd[1]: session-15.scope: Deactivated successfully.
Nov 8 00:27:40.187699 systemd-logind[1697]: Session 15 logged out. Waiting for processes to exit.
Nov 8 00:27:40.189218 systemd-logind[1697]: Removed session 15.
Nov 8 00:27:40.367670 containerd[1730]: time="2025-11-08T00:27:40.367483016Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:27:40.371592 containerd[1730]: time="2025-11-08T00:27:40.371369767Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Nov 8 00:27:40.371592 containerd[1730]: time="2025-11-08T00:27:40.371468568Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Nov 8 00:27:40.372104 kubelet[3231]: E1108 00:27:40.371892 3231 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 8 00:27:40.372104 kubelet[3231]: E1108 00:27:40.371947 3231 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 8 00:27:40.373807 kubelet[3231]: E1108 00:27:40.372083 3231 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-sn8mq_calico-system(fc5e470f-14b3-444c-ac8a-3efb084d3809): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:27:40.373807 kubelet[3231]: E1108 00:27:40.373019 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-sn8mq" podUID="fc5e470f-14b3-444c-ac8a-3efb084d3809"
Nov 8 00:27:41.132742 kubelet[3231]: E1108 00:27:41.132307 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74448999c6-ltcnv" podUID="136d7667-9127-4dfa-b5ce-1dde786b7211"
Nov 8 00:27:41.133332 kubelet[3231]: E1108 00:27:41.133097 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kbfws" podUID="77eae253-1bce-4de5-8e9b-23a9c58b4ee0"
Nov 8 00:27:42.142415 kubelet[3231]: E1108 00:27:42.141976 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74448999c6-6grzk" podUID="0ce899e0-d12a-4abb-b40d-26c4cc149868"
Nov 8 00:27:45.133938 containerd[1730]: time="2025-11-08T00:27:45.133647535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Nov 8 00:27:45.294044 systemd[1]: Started sshd@13-10.200.8.42:22-10.200.16.10:44272.service - OpenSSH per-connection server daemon (10.200.16.10:44272).
Nov 8 00:27:45.379067 containerd[1730]: time="2025-11-08T00:27:45.379006028Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:27:45.383630 containerd[1730]: time="2025-11-08T00:27:45.383558880Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Nov 8 00:27:45.385405 containerd[1730]: time="2025-11-08T00:27:45.383606481Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Nov 8 00:27:45.385486 kubelet[3231]: E1108 00:27:45.383914 3231 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Nov 8 00:27:45.385486 kubelet[3231]: E1108 00:27:45.383972 3231 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Nov 8 00:27:45.385486 kubelet[3231]: E1108 00:27:45.384080 3231 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-7747c7fc7-whbjm_calico-system(5b3e910f-ad36-41f9-8d09-74f4b684ee03): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:27:45.386571 containerd[1730]: time="2025-11-08T00:27:45.386535214Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Nov 8 00:27:45.630849 containerd[1730]: time="2025-11-08T00:27:45.630802295Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:27:45.633864 containerd[1730]: time="2025-11-08T00:27:45.633795929Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Nov 8 00:27:45.633986 containerd[1730]: time="2025-11-08T00:27:45.633894330Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Nov 8 00:27:45.634141 kubelet[3231]: E1108 00:27:45.634100 3231 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Nov 8 00:27:45.634219 kubelet[3231]: E1108 00:27:45.634152 3231 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Nov 8 00:27:45.634306 kubelet[3231]: E1108 00:27:45.634283 3231 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-7747c7fc7-whbjm_calico-system(5b3e910f-ad36-41f9-8d09-74f4b684ee03): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:27:45.635330 kubelet[3231]: E1108 00:27:45.635268 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7747c7fc7-whbjm" podUID="5b3e910f-ad36-41f9-8d09-74f4b684ee03"
Nov 8 00:27:45.938749 sshd[6044]: Accepted publickey for core from 10.200.16.10 port 44272 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY
Nov 8 00:27:45.940990 sshd[6044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:27:45.954092 systemd-logind[1697]: New session 16 of user core.
Nov 8 00:27:45.959236 systemd[1]: Started session-16.scope - Session 16 of User core.
Nov 8 00:27:46.452817 sshd[6044]: pam_unix(sshd:session): session closed for user core
Nov 8 00:27:46.457178 systemd-logind[1697]: Session 16 logged out. Waiting for processes to exit.
Nov 8 00:27:46.457894 systemd[1]: sshd@13-10.200.8.42:22-10.200.16.10:44272.service: Deactivated successfully.
Nov 8 00:27:46.461200 systemd[1]: session-16.scope: Deactivated successfully.
Nov 8 00:27:46.465530 systemd-logind[1697]: Removed session 16.
Nov 8 00:27:48.130114 containerd[1730]: time="2025-11-08T00:27:48.129971251Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Nov 8 00:27:48.387892 containerd[1730]: time="2025-11-08T00:27:48.387539355Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:27:48.390673 containerd[1730]: time="2025-11-08T00:27:48.390494997Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Nov 8 00:27:48.390673 containerd[1730]: time="2025-11-08T00:27:48.390624299Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Nov 8 00:27:48.391070 kubelet[3231]: E1108 00:27:48.391020 3231 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 8 00:27:48.391472 kubelet[3231]: E1108 00:27:48.391081 3231 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 8 00:27:48.391472 kubelet[3231]: E1108 00:27:48.391181 3231 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-598d7bd9d8-kgsh2_calico-system(928ddcfe-b055-4feb-bfb6-23dedc6fa744): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:27:48.391472 kubelet[3231]: E1108 00:27:48.391232 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-598d7bd9d8-kgsh2" podUID="928ddcfe-b055-4feb-bfb6-23dedc6fa744"
Nov 8 00:27:51.577065 systemd[1]: Started sshd@14-10.200.8.42:22-10.200.16.10:42096.service - OpenSSH per-connection server daemon (10.200.16.10:42096).
Nov 8 00:27:52.231478 sshd[6086]: Accepted publickey for core from 10.200.16.10 port 42096 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY
Nov 8 00:27:52.233798 sshd[6086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:27:52.240697 systemd-logind[1697]: New session 17 of user core.
Nov 8 00:27:52.246889 systemd[1]: Started session-17.scope - Session 17 of User core.
Nov 8 00:27:52.787991 sshd[6086]: pam_unix(sshd:session): session closed for user core
Nov 8 00:27:52.791059 systemd[1]: sshd@14-10.200.8.42:22-10.200.16.10:42096.service: Deactivated successfully.
Nov 8 00:27:52.793301 systemd[1]: session-17.scope: Deactivated successfully.
Nov 8 00:27:52.795089 systemd-logind[1697]: Session 17 logged out. Waiting for processes to exit.
Nov 8 00:27:52.796372 systemd-logind[1697]: Removed session 17.
Nov 8 00:27:53.132102 kubelet[3231]: E1108 00:27:53.132049 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-sn8mq" podUID="fc5e470f-14b3-444c-ac8a-3efb084d3809"
Nov 8 00:27:54.132135 containerd[1730]: time="2025-11-08T00:27:54.131745106Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 8 00:27:54.378521 containerd[1730]: time="2025-11-08T00:27:54.378458911Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:27:54.383660 containerd[1730]: time="2025-11-08T00:27:54.383528570Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 8 00:27:54.383804 containerd[1730]: time="2025-11-08T00:27:54.383641772Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 8 00:27:54.384087 kubelet[3231]: E1108 00:27:54.383882 3231 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:27:54.384087 kubelet[3231]: E1108 00:27:54.383941 3231 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:27:54.384087 kubelet[3231]: E1108 00:27:54.384035 3231 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-74448999c6-6grzk_calico-apiserver(0ce899e0-d12a-4abb-b40d-26c4cc149868): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:27:54.384556 kubelet[3231]: E1108 00:27:54.384079 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74448999c6-6grzk" podUID="0ce899e0-d12a-4abb-b40d-26c4cc149868"
Nov 8 00:27:56.131529 containerd[1730]: time="2025-11-08T00:27:56.131486651Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Nov 8 00:27:56.377613 containerd[1730]: time="2025-11-08T00:27:56.377560592Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:27:56.380681 containerd[1730]: time="2025-11-08T00:27:56.380586634Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Nov 8 00:27:56.380681 containerd[1730]: time="2025-11-08T00:27:56.380631435Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Nov 8 00:27:56.380924 kubelet[3231]: E1108 00:27:56.380882 3231 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 8 00:27:56.382900 kubelet[3231]: E1108 00:27:56.380961 3231 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 8 00:27:56.382900 kubelet[3231]: E1108 00:27:56.381653 3231 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-kbfws_calico-system(77eae253-1bce-4de5-8e9b-23a9c58b4ee0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:27:56.383010 containerd[1730]: time="2025-11-08T00:27:56.381504047Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 8 00:27:56.625755 containerd[1730]: time="2025-11-08T00:27:56.625690221Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:27:56.629140 containerd[1730]: time="2025-11-08T00:27:56.628987666Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 8 00:27:56.629140 containerd[1730]: time="2025-11-08T00:27:56.629088068Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 8 00:27:56.631774 kubelet[3231]: E1108 00:27:56.629467 3231 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:27:56.631774 kubelet[3231]: E1108 00:27:56.629518 3231 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 8 00:27:56.631774 kubelet[3231]: E1108 00:27:56.629736 3231 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-74448999c6-ltcnv_calico-apiserver(136d7667-9127-4dfa-b5ce-1dde786b7211): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:27:56.631774 kubelet[3231]: E1108 00:27:56.629778 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74448999c6-ltcnv" podUID="136d7667-9127-4dfa-b5ce-1dde786b7211"
Nov 8 00:27:56.632117 containerd[1730]: time="2025-11-08T00:27:56.630396986Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Nov 8 00:27:56.879510 containerd[1730]: time="2025-11-08T00:27:56.879454427Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 8 00:27:56.883744 containerd[1730]: time="2025-11-08T00:27:56.882135264Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Nov 8 00:27:56.883744 containerd[1730]: time="2025-11-08T00:27:56.882232365Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Nov 8 00:27:56.883937 kubelet[3231]: E1108 00:27:56.882426 3231 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 8 00:27:56.883937 kubelet[3231]: E1108 00:27:56.882475 3231 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 8 00:27:56.883937 kubelet[3231]: E1108 00:27:56.882571 3231 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-kbfws_calico-system(77eae253-1bce-4de5-8e9b-23a9c58b4ee0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Nov 8 00:27:56.884135 kubelet[3231]: E1108 00:27:56.882624 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kbfws" podUID="77eae253-1bce-4de5-8e9b-23a9c58b4ee0"
Nov 8 00:27:57.133520 kubelet[3231]: E1108 00:27:57.133337 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7747c7fc7-whbjm" podUID="5b3e910f-ad36-41f9-8d09-74f4b684ee03"
Nov 8 00:27:57.906091 systemd[1]: Started sshd@15-10.200.8.42:22-10.200.16.10:42108.service - OpenSSH per-connection server daemon (10.200.16.10:42108).
Nov 8 00:27:58.539436 sshd[6113]: Accepted publickey for core from 10.200.16.10 port 42108 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY
Nov 8 00:27:58.541918 sshd[6113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:27:58.548611 systemd-logind[1697]: New session 18 of user core.
Nov 8 00:27:58.554898 systemd[1]: Started session-18.scope - Session 18 of User core.
Nov 8 00:27:59.131460 sshd[6113]: pam_unix(sshd:session): session closed for user core
Nov 8 00:27:59.138435 systemd[1]: sshd@15-10.200.8.42:22-10.200.16.10:42108.service: Deactivated successfully.
Nov 8 00:27:59.138949 systemd-logind[1697]: Session 18 logged out. Waiting for processes to exit.
Nov 8 00:27:59.142203 systemd[1]: session-18.scope: Deactivated successfully.
Nov 8 00:27:59.143613 systemd-logind[1697]: Removed session 18.
Nov 8 00:27:59.247895 systemd[1]: Started sshd@16-10.200.8.42:22-10.200.16.10:42120.service - OpenSSH per-connection server daemon (10.200.16.10:42120).
Nov 8 00:27:59.882649 sshd[6126]: Accepted publickey for core from 10.200.16.10 port 42120 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY
Nov 8 00:27:59.884936 sshd[6126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:27:59.891474 systemd-logind[1697]: New session 19 of user core.
Nov 8 00:27:59.897102 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 8 00:28:00.507435 sshd[6126]: pam_unix(sshd:session): session closed for user core
Nov 8 00:28:00.510452 systemd[1]: sshd@16-10.200.8.42:22-10.200.16.10:42120.service: Deactivated successfully.
Nov 8 00:28:00.512930 systemd[1]: session-19.scope: Deactivated successfully.
Nov 8 00:28:00.514458 systemd-logind[1697]: Session 19 logged out. Waiting for processes to exit.
Nov 8 00:28:00.515983 systemd-logind[1697]: Removed session 19.
Nov 8 00:28:00.622046 systemd[1]: Started sshd@17-10.200.8.42:22-10.200.16.10:40292.service - OpenSSH per-connection server daemon (10.200.16.10:40292).
Nov 8 00:28:01.260753 sshd[6137]: Accepted publickey for core from 10.200.16.10 port 40292 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY
Nov 8 00:28:01.264318 sshd[6137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:28:01.272917 systemd-logind[1697]: New session 20 of user core.
Nov 8 00:28:01.276901 systemd[1]: Started session-20.scope - Session 20 of User core.
Nov 8 00:28:02.131786 kubelet[3231]: E1108 00:28:02.131682 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-598d7bd9d8-kgsh2" podUID="928ddcfe-b055-4feb-bfb6-23dedc6fa744"
Nov 8 00:28:02.516654 sshd[6137]: pam_unix(sshd:session): session closed for user core
Nov 8 00:28:02.521527 systemd-logind[1697]: Session 20 logged out. Waiting for processes to exit.
Nov 8 00:28:02.522464 systemd[1]: sshd@17-10.200.8.42:22-10.200.16.10:40292.service: Deactivated successfully.
Nov 8 00:28:02.525953 systemd[1]: session-20.scope: Deactivated successfully.
Nov 8 00:28:02.527990 systemd-logind[1697]: Removed session 20.
Nov 8 00:28:02.636150 systemd[1]: Started sshd@18-10.200.8.42:22-10.200.16.10:40298.service - OpenSSH per-connection server daemon (10.200.16.10:40298).
Nov 8 00:28:03.270953 sshd[6155]: Accepted publickey for core from 10.200.16.10 port 40298 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY
Nov 8 00:28:03.273552 sshd[6155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:28:03.279137 systemd-logind[1697]: New session 21 of user core.
Nov 8 00:28:03.284907 systemd[1]: Started session-21.scope - Session 21 of User core.
Nov 8 00:28:03.959904 sshd[6155]: pam_unix(sshd:session): session closed for user core
Nov 8 00:28:03.964986 systemd[1]: sshd@18-10.200.8.42:22-10.200.16.10:40298.service: Deactivated successfully.
Nov 8 00:28:03.965355 systemd-logind[1697]: Session 21 logged out. Waiting for processes to exit.
Nov 8 00:28:03.968360 systemd[1]: session-21.scope: Deactivated successfully.
Nov 8 00:28:03.971691 systemd-logind[1697]: Removed session 21.
Nov 8 00:28:04.079309 systemd[1]: Started sshd@19-10.200.8.42:22-10.200.16.10:40304.service - OpenSSH per-connection server daemon (10.200.16.10:40304).
Nov 8 00:28:04.130535 kubelet[3231]: E1108 00:28:04.130186 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-sn8mq" podUID="fc5e470f-14b3-444c-ac8a-3efb084d3809"
Nov 8 00:28:04.709086 sshd[6168]: Accepted publickey for core from 10.200.16.10 port 40304 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY
Nov 8 00:28:04.712262 sshd[6168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:28:04.719617 systemd-logind[1697]: New session 22 of user core.
Nov 8 00:28:04.723890 systemd[1]: Started session-22.scope - Session 22 of User core.
Nov 8 00:28:05.264774 sshd[6168]: pam_unix(sshd:session): session closed for user core
Nov 8 00:28:05.269916 systemd[1]: sshd@19-10.200.8.42:22-10.200.16.10:40304.service: Deactivated successfully.
Nov 8 00:28:05.275186 systemd[1]: session-22.scope: Deactivated successfully.
Nov 8 00:28:05.279896 systemd-logind[1697]: Session 22 logged out. Waiting for processes to exit.
Nov 8 00:28:05.281970 systemd-logind[1697]: Removed session 22.
Nov 8 00:28:07.133769 kubelet[3231]: E1108 00:28:07.132165 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74448999c6-6grzk" podUID="0ce899e0-d12a-4abb-b40d-26c4cc149868"
Nov 8 00:28:08.132149 kubelet[3231]: E1108 00:28:08.132066 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74448999c6-ltcnv" podUID="136d7667-9127-4dfa-b5ce-1dde786b7211"
Nov 8 00:28:10.134286 kubelet[3231]: E1108 00:28:10.134229 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7747c7fc7-whbjm" podUID="5b3e910f-ad36-41f9-8d09-74f4b684ee03"
Nov 8 00:28:10.379848 systemd[1]: Started sshd@20-10.200.8.42:22-10.200.16.10:60938.service - OpenSSH per-connection server daemon (10.200.16.10:60938).
Nov 8 00:28:11.020753 sshd[6185]: Accepted publickey for core from 10.200.16.10 port 60938 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY
Nov 8 00:28:11.021381 sshd[6185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:28:11.026689 systemd-logind[1697]: New session 23 of user core.
Nov 8 00:28:11.034102 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 8 00:28:11.136009 kubelet[3231]: E1108 00:28:11.135948 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kbfws" podUID="77eae253-1bce-4de5-8e9b-23a9c58b4ee0"
Nov 8 00:28:11.566968 sshd[6185]: pam_unix(sshd:session): session closed for user core
Nov 8 00:28:11.573280 systemd[1]: sshd@20-10.200.8.42:22-10.200.16.10:60938.service: Deactivated successfully.
Nov 8 00:28:11.577438 systemd[1]: session-23.scope: Deactivated successfully.
Nov 8 00:28:11.580019 systemd-logind[1697]: Session 23 logged out. Waiting for processes to exit.
Nov 8 00:28:11.581389 systemd-logind[1697]: Removed session 23.
Nov 8 00:28:14.131174 kubelet[3231]: E1108 00:28:14.130759 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-598d7bd9d8-kgsh2" podUID="928ddcfe-b055-4feb-bfb6-23dedc6fa744"
Nov 8 00:28:16.676905 systemd[1]: Started sshd@21-10.200.8.42:22-10.200.16.10:60948.service - OpenSSH per-connection server daemon (10.200.16.10:60948).
Nov 8 00:28:17.307829 sshd[6197]: Accepted publickey for core from 10.200.16.10 port 60948 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY
Nov 8 00:28:17.308808 sshd[6197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:28:17.315197 systemd-logind[1697]: New session 24 of user core.
Nov 8 00:28:17.322375 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 8 00:28:17.388197 systemd[1]: run-containerd-runc-k8s.io-d002ffc835705b323fa4eceed52c48a1d0a9167537104fbe59ca3caf5760f94f-runc.rskA8P.mount: Deactivated successfully.
Nov 8 00:28:17.894489 sshd[6197]: pam_unix(sshd:session): session closed for user core
Nov 8 00:28:17.900365 systemd[1]: sshd@21-10.200.8.42:22-10.200.16.10:60948.service: Deactivated successfully.
Nov 8 00:28:17.904086 systemd[1]: session-24.scope: Deactivated successfully.
Nov 8 00:28:17.907052 systemd-logind[1697]: Session 24 logged out. Waiting for processes to exit.
Nov 8 00:28:17.909139 systemd-logind[1697]: Removed session 24.
Nov 8 00:28:18.131645 kubelet[3231]: E1108 00:28:18.131490 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-sn8mq" podUID="fc5e470f-14b3-444c-ac8a-3efb084d3809"
Nov 8 00:28:21.133472 kubelet[3231]: E1108 00:28:21.132804 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74448999c6-ltcnv" podUID="136d7667-9127-4dfa-b5ce-1dde786b7211"
Nov 8 00:28:22.131126 kubelet[3231]: E1108 00:28:22.131068 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74448999c6-6grzk" podUID="0ce899e0-d12a-4abb-b40d-26c4cc149868"
Nov 8 00:28:23.013839 systemd[1]: Started sshd@22-10.200.8.42:22-10.200.16.10:53612.service - OpenSSH per-connection server daemon (10.200.16.10:53612).
Nov 8 00:28:23.647227 sshd[6232]: Accepted publickey for core from 10.200.16.10 port 53612 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY
Nov 8 00:28:23.648812 sshd[6232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:28:23.652924 systemd-logind[1697]: New session 25 of user core.
Nov 8 00:28:23.658937 systemd[1]: Started session-25.scope - Session 25 of User core.
Nov 8 00:28:24.189395 sshd[6232]: pam_unix(sshd:session): session closed for user core
Nov 8 00:28:24.193473 systemd-logind[1697]: Session 25 logged out. Waiting for processes to exit.
Nov 8 00:28:24.194764 systemd[1]: sshd@22-10.200.8.42:22-10.200.16.10:53612.service: Deactivated successfully.
Nov 8 00:28:24.199657 systemd[1]: session-25.scope: Deactivated successfully.
Nov 8 00:28:24.202975 systemd-logind[1697]: Removed session 25.
Nov 8 00:28:25.132436 kubelet[3231]: E1108 00:28:25.132381 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-598d7bd9d8-kgsh2" podUID="928ddcfe-b055-4feb-bfb6-23dedc6fa744"
Nov 8 00:28:25.138661 kubelet[3231]: E1108 00:28:25.138501 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-kbfws" podUID="77eae253-1bce-4de5-8e9b-23a9c58b4ee0"
Nov 8 00:28:25.138858 kubelet[3231]: E1108 00:28:25.138700 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7747c7fc7-whbjm" podUID="5b3e910f-ad36-41f9-8d09-74f4b684ee03"
Nov 8 00:28:29.306101 systemd[1]: Started sshd@23-10.200.8.42:22-10.200.16.10:53626.service - OpenSSH per-connection server daemon (10.200.16.10:53626).
Nov 8 00:28:29.949825 sshd[6245]: Accepted publickey for core from 10.200.16.10 port 53626 ssh2: RSA SHA256:yxpeZlueYXighPWk9NsCYeh/Jv55qT9g1dhKmSZqCCY
Nov 8 00:28:29.951336 sshd[6245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:28:29.955438 systemd-logind[1697]: New session 26 of user core.
Nov 8 00:28:29.962869 systemd[1]: Started session-26.scope - Session 26 of User core.
Nov 8 00:28:30.496856 sshd[6245]: pam_unix(sshd:session): session closed for user core
Nov 8 00:28:30.500106 systemd-logind[1697]: Session 26 logged out. Waiting for processes to exit.
Nov 8 00:28:30.500962 systemd[1]: sshd@23-10.200.8.42:22-10.200.16.10:53626.service: Deactivated successfully.
Nov 8 00:28:30.505798 systemd[1]: session-26.scope: Deactivated successfully.
Nov 8 00:28:30.511297 systemd-logind[1697]: Removed session 26.
Nov 8 00:28:32.133755 kubelet[3231]: E1108 00:28:32.131850 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-74448999c6-ltcnv" podUID="136d7667-9127-4dfa-b5ce-1dde786b7211"
Nov 8 00:28:32.133755 kubelet[3231]: E1108 00:28:32.133429 3231 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-sn8mq" podUID="fc5e470f-14b3-444c-ac8a-3efb084d3809"