Nov 1 00:20:51.093731 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Oct 31 22:41:55 -00 2025
Nov 1 00:20:51.093758 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478
Nov 1 00:20:51.093768 kernel: BIOS-provided physical RAM map:
Nov 1 00:20:51.093774 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Nov 1 00:20:51.093781 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Nov 1 00:20:51.093788 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Nov 1 00:20:51.093796 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20
Nov 1 00:20:51.093806 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved
Nov 1 00:20:51.093814 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Nov 1 00:20:51.093820 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Nov 1 00:20:51.093826 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Nov 1 00:20:51.093835 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Nov 1 00:20:51.093841 kernel: printk: bootconsole [earlyser0] enabled
Nov 1 00:20:51.093847 kernel: NX (Execute Disable) protection: active
Nov 1 00:20:51.093860 kernel: APIC: Static calls initialized
Nov 1 00:20:51.093867 kernel: efi: EFI v2.7 by Microsoft
Nov 1 00:20:51.093875 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3ee73a98
Nov 1 00:20:51.093884 kernel: SMBIOS 3.1.0 present.
Nov 1 00:20:51.093892 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Nov 1 00:20:51.093899 kernel: Hypervisor detected: Microsoft Hyper-V
Nov 1 00:20:51.093909 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Nov 1 00:20:51.093916 kernel: Hyper-V: Host Build 10.0.20348.1827-1-0
Nov 1 00:20:51.093923 kernel: Hyper-V: Nested features: 0x1e0101
Nov 1 00:20:51.093933 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Nov 1 00:20:51.093942 kernel: Hyper-V: Using hypercall for remote TLB flush
Nov 1 00:20:51.093949 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Nov 1 00:20:51.093960 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Nov 1 00:20:51.093967 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Nov 1 00:20:51.093975 kernel: tsc: Detected 2593.905 MHz processor
Nov 1 00:20:51.093985 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 1 00:20:51.093992 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 1 00:20:51.093999 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Nov 1 00:20:51.094006 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Nov 1 00:20:51.094019 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 1 00:20:51.094026 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Nov 1 00:20:51.094033 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Nov 1 00:20:51.094043 kernel: Using GB pages for direct mapping
Nov 1 00:20:51.094050 kernel: Secure boot disabled
Nov 1 00:20:51.094058 kernel: ACPI: Early table checksum verification disabled
Nov 1 00:20:51.094067 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Nov 1 00:20:51.094079 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 1 00:20:51.094090 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 1 00:20:51.094098 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Nov 1 00:20:51.094108 kernel: ACPI: FACS 0x000000003FFFE000 000040
Nov 1 00:20:51.094118 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 1 00:20:51.094127 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 1 00:20:51.094138 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 1 00:20:51.094151 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 1 00:20:51.094159 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 1 00:20:51.094168 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 1 00:20:51.094178 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 1 00:20:51.094185 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Nov 1 00:20:51.094196 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Nov 1 00:20:51.094204 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Nov 1 00:20:51.094212 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Nov 1 00:20:51.094224 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Nov 1 00:20:51.094231 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Nov 1 00:20:51.094242 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Nov 1 00:20:51.094250 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Nov 1 00:20:51.094259 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Nov 1 00:20:51.094268 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Nov 1 00:20:51.094275 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Nov 1 00:20:51.094285 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Nov 1 00:20:51.094293 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Nov 1 00:20:51.094306 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Nov 1 00:20:51.094313 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Nov 1 00:20:51.094322 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Nov 1 00:20:51.094332 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Nov 1 00:20:51.094339 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Nov 1 00:20:51.094350 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Nov 1 00:20:51.094358 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Nov 1 00:20:51.094368 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Nov 1 00:20:51.094380 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Nov 1 00:20:51.094392 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Nov 1 00:20:51.094403 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Nov 1 00:20:51.094412 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Nov 1 00:20:51.094423 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Nov 1 00:20:51.094432 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Nov 1 00:20:51.094440 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Nov 1 00:20:51.094451 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Nov 1 00:20:51.094461 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Nov 1 00:20:51.094469 kernel: Zone ranges:
Nov 1 00:20:51.094482 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 1 00:20:51.094491 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Nov 1 00:20:51.094499 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Nov 1 00:20:51.094509 kernel: Movable zone start for each node
Nov 1 00:20:51.094519 kernel: Early memory node ranges
Nov 1 00:20:51.094528 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Nov 1 00:20:51.094538 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Nov 1 00:20:51.094548 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Nov 1 00:20:51.094558 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Nov 1 00:20:51.094572 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Nov 1 00:20:51.094583 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 1 00:20:51.094594 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Nov 1 00:20:51.094604 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Nov 1 00:20:51.094616 kernel: ACPI: PM-Timer IO Port: 0x408
Nov 1 00:20:51.094628 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Nov 1 00:20:51.094641 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Nov 1 00:20:51.094653 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 1 00:20:51.094665 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 1 00:20:51.094700 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Nov 1 00:20:51.094714 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Nov 1 00:20:51.094726 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Nov 1 00:20:51.094741 kernel: Booting paravirtualized kernel on Hyper-V
Nov 1 00:20:51.094756 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 1 00:20:51.094770 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 1 00:20:51.094785 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u1048576
Nov 1 00:20:51.094799 kernel: pcpu-alloc: s196712 r8192 d32664 u1048576 alloc=1*2097152
Nov 1 00:20:51.094813 kernel: pcpu-alloc: [0] 0 1
Nov 1 00:20:51.094830 kernel: Hyper-V: PV spinlocks enabled
Nov 1 00:20:51.094844 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 1 00:20:51.094860 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478
Nov 1 00:20:51.094875 kernel: random: crng init done
Nov 1 00:20:51.094888 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 1 00:20:51.094901 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 1 00:20:51.094915 kernel: Fallback order for Node 0: 0
Nov 1 00:20:51.094929 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Nov 1 00:20:51.094947 kernel: Policy zone: Normal
Nov 1 00:20:51.094972 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 1 00:20:51.094987 kernel: software IO TLB: area num 2.
Nov 1 00:20:51.095004 kernel: Memory: 8077072K/8387460K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42884K init, 2316K bss, 310128K reserved, 0K cma-reserved)
Nov 1 00:20:51.095020 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 1 00:20:51.095035 kernel: ftrace: allocating 37980 entries in 149 pages
Nov 1 00:20:51.095050 kernel: ftrace: allocated 149 pages with 4 groups
Nov 1 00:20:51.095065 kernel: Dynamic Preempt: voluntary
Nov 1 00:20:51.095080 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 1 00:20:51.095097 kernel: rcu: RCU event tracing is enabled.
Nov 1 00:20:51.095115 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 1 00:20:51.095131 kernel: Trampoline variant of Tasks RCU enabled.
Nov 1 00:20:51.095146 kernel: Rude variant of Tasks RCU enabled.
Nov 1 00:20:51.095161 kernel: Tracing variant of Tasks RCU enabled.
Nov 1 00:20:51.095176 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 1 00:20:51.095191 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 1 00:20:51.095209 kernel: Using NULL legacy PIC
Nov 1 00:20:51.095224 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Nov 1 00:20:51.095240 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 1 00:20:51.095255 kernel: Console: colour dummy device 80x25
Nov 1 00:20:51.095269 kernel: printk: console [tty1] enabled
Nov 1 00:20:51.095285 kernel: printk: console [ttyS0] enabled
Nov 1 00:20:51.095300 kernel: printk: bootconsole [earlyser0] disabled
Nov 1 00:20:51.095315 kernel: ACPI: Core revision 20230628
Nov 1 00:20:51.095330 kernel: Failed to register legacy timer interrupt
Nov 1 00:20:51.095345 kernel: APIC: Switch to symmetric I/O mode setup
Nov 1 00:20:51.095363 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Nov 1 00:20:51.095378 kernel: Hyper-V: Using IPI hypercalls
Nov 1 00:20:51.095393 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Nov 1 00:20:51.095408 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Nov 1 00:20:51.095423 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Nov 1 00:20:51.095437 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Nov 1 00:20:51.095450 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Nov 1 00:20:51.095463 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Nov 1 00:20:51.095477 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593905)
Nov 1 00:20:51.095493 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Nov 1 00:20:51.095505 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Nov 1 00:20:51.095524 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 1 00:20:51.095541 kernel: Spectre V2 : Mitigation: Retpolines
Nov 1 00:20:51.095557 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 1 00:20:51.095573 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Nov 1 00:20:51.095589 kernel: RETBleed: Vulnerable
Nov 1 00:20:51.095602 kernel: Speculative Store Bypass: Vulnerable
Nov 1 00:20:51.095616 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 1 00:20:51.095630 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 1 00:20:51.095648 kernel: active return thunk: its_return_thunk
Nov 1 00:20:51.095662 kernel: ITS: Mitigation: Aligned branch/return thunks
Nov 1 00:20:51.095699 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 1 00:20:51.095714 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 1 00:20:51.095729 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 1 00:20:51.095743 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Nov 1 00:20:51.095758 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Nov 1 00:20:51.095772 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Nov 1 00:20:51.095787 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 1 00:20:51.095801 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Nov 1 00:20:51.095816 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Nov 1 00:20:51.095834 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Nov 1 00:20:51.095848 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Nov 1 00:20:51.095863 kernel: Freeing SMP alternatives memory: 32K
Nov 1 00:20:51.095877 kernel: pid_max: default: 32768 minimum: 301
Nov 1 00:20:51.095892 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 1 00:20:51.095906 kernel: landlock: Up and running.
Nov 1 00:20:51.095920 kernel: SELinux: Initializing.
Nov 1 00:20:51.095935 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 1 00:20:51.095950 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 1 00:20:51.095965 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Nov 1 00:20:51.095979 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 1 00:20:51.095997 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 1 00:20:51.096011 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 1 00:20:51.096026 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Nov 1 00:20:51.096041 kernel: signal: max sigframe size: 3632
Nov 1 00:20:51.096055 kernel: rcu: Hierarchical SRCU implementation.
Nov 1 00:20:51.096070 kernel: rcu: Max phase no-delay instances is 400.
Nov 1 00:20:51.096085 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 1 00:20:51.096099 kernel: smp: Bringing up secondary CPUs ...
Nov 1 00:20:51.096113 kernel: smpboot: x86: Booting SMP configuration:
Nov 1 00:20:51.096131 kernel: .... node #0, CPUs: #1
Nov 1 00:20:51.096146 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Nov 1 00:20:51.096162 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Nov 1 00:20:51.096176 kernel: smp: Brought up 1 node, 2 CPUs
Nov 1 00:20:51.096191 kernel: smpboot: Max logical packages: 1
Nov 1 00:20:51.096206 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Nov 1 00:20:51.096220 kernel: devtmpfs: initialized
Nov 1 00:20:51.096235 kernel: x86/mm: Memory block size: 128MB
Nov 1 00:20:51.096252 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Nov 1 00:20:51.096267 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 1 00:20:51.096282 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 1 00:20:51.096297 kernel: pinctrl core: initialized pinctrl subsystem
Nov 1 00:20:51.096311 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 1 00:20:51.096326 kernel: audit: initializing netlink subsys (disabled)
Nov 1 00:20:51.096340 kernel: audit: type=2000 audit(1761956449.028:1): state=initialized audit_enabled=0 res=1
Nov 1 00:20:51.096355 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 1 00:20:51.096369 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 1 00:20:51.096387 kernel: cpuidle: using governor menu
Nov 1 00:20:51.096401 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 1 00:20:51.096415 kernel: dca service started, version 1.12.1
Nov 1 00:20:51.096430 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Nov 1 00:20:51.096445 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 1 00:20:51.096459 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 1 00:20:51.096474 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 1 00:20:51.096488 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 1 00:20:51.096503 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 1 00:20:51.096521 kernel: ACPI: Added _OSI(Module Device)
Nov 1 00:20:51.096536 kernel: ACPI: Added _OSI(Processor Device)
Nov 1 00:20:51.096550 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 1 00:20:51.096565 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 1 00:20:51.096580 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 1 00:20:51.096594 kernel: ACPI: Interpreter enabled
Nov 1 00:20:51.096609 kernel: ACPI: PM: (supports S0 S5)
Nov 1 00:20:51.096624 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 1 00:20:51.096638 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 1 00:20:51.096656 kernel: PCI: Ignoring E820 reservations for host bridge windows
Nov 1 00:20:51.096686 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Nov 1 00:20:51.096701 kernel: iommu: Default domain type: Translated
Nov 1 00:20:51.096716 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 1 00:20:51.096730 kernel: efivars: Registered efivars operations
Nov 1 00:20:51.096745 kernel: PCI: Using ACPI for IRQ routing
Nov 1 00:20:51.096759 kernel: PCI: System does not support PCI
Nov 1 00:20:51.096774 kernel: vgaarb: loaded
Nov 1 00:20:51.096788 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Nov 1 00:20:51.096806 kernel: VFS: Disk quotas dquot_6.6.0
Nov 1 00:20:51.096820 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 1 00:20:51.096835 kernel: pnp: PnP ACPI init
Nov 1 00:20:51.096849 kernel: pnp: PnP ACPI: found 3 devices
Nov 1 00:20:51.096864 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 1 00:20:51.096879 kernel: NET: Registered PF_INET protocol family
Nov 1 00:20:51.096893 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 1 00:20:51.096908 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 1 00:20:51.096923 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 1 00:20:51.096940 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 1 00:20:51.096955 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Nov 1 00:20:51.096969 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 1 00:20:51.096984 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 1 00:20:51.096998 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 1 00:20:51.097013 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 1 00:20:51.097027 kernel: NET: Registered PF_XDP protocol family
Nov 1 00:20:51.097042 kernel: PCI: CLS 0 bytes, default 64
Nov 1 00:20:51.097056 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 1 00:20:51.097074 kernel: software IO TLB: mapped [mem 0x000000003ae73000-0x000000003ee73000] (64MB)
Nov 1 00:20:51.097089 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Nov 1 00:20:51.097103 kernel: Initialise system trusted keyrings
Nov 1 00:20:51.097118 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Nov 1 00:20:51.097132 kernel: Key type asymmetric registered
Nov 1 00:20:51.097146 kernel: Asymmetric key parser 'x509' registered
Nov 1 00:20:51.097161 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 1 00:20:51.097175 kernel: io scheduler mq-deadline registered
Nov 1 00:20:51.097190 kernel: io scheduler kyber registered
Nov 1 00:20:51.097207 kernel: io scheduler bfq registered
Nov 1 00:20:51.097222 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 1 00:20:51.097236 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 1 00:20:51.097251 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 1 00:20:51.097266 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Nov 1 00:20:51.097280 kernel: i8042: PNP: No PS/2 controller found.
Nov 1 00:20:51.097462 kernel: rtc_cmos 00:02: registered as rtc0
Nov 1 00:20:51.097592 kernel: rtc_cmos 00:02: setting system clock to 2025-11-01T00:20:50 UTC (1761956450)
Nov 1 00:20:51.098435 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Nov 1 00:20:51.098460 kernel: intel_pstate: CPU model not supported
Nov 1 00:20:51.098476 kernel: efifb: probing for efifb
Nov 1 00:20:51.098490 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Nov 1 00:20:51.098506 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Nov 1 00:20:51.098521 kernel: efifb: scrolling: redraw
Nov 1 00:20:51.098535 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Nov 1 00:20:51.098550 kernel: Console: switching to colour frame buffer device 128x48
Nov 1 00:20:51.098565 kernel: fb0: EFI VGA frame buffer device
Nov 1 00:20:51.098584 kernel: pstore: Using crash dump compression: deflate
Nov 1 00:20:51.098599 kernel: pstore: Registered efi_pstore as persistent store backend
Nov 1 00:20:51.098614 kernel: NET: Registered PF_INET6 protocol family
Nov 1 00:20:51.098628 kernel: Segment Routing with IPv6
Nov 1 00:20:51.098643 kernel: In-situ OAM (IOAM) with IPv6
Nov 1 00:20:51.098658 kernel: NET: Registered PF_PACKET protocol family
Nov 1 00:20:51.099099 kernel: Key type dns_resolver registered
Nov 1 00:20:51.099117 kernel: IPI shorthand broadcast: enabled
Nov 1 00:20:51.099130 kernel: sched_clock: Marking stable (874037300, 47887700)->(1137975900, -216050900)
Nov 1 00:20:51.099149 kernel: registered taskstats version 1
Nov 1 00:20:51.099163 kernel: Loading compiled-in X.509 certificates
Nov 1 00:20:51.099183 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cc4975b6f5d9e3149f7a95c8552b8f9120c3a1f4'
Nov 1 00:20:51.099199 kernel: Key type .fscrypt registered
Nov 1 00:20:51.099213 kernel: Key type fscrypt-provisioning registered
Nov 1 00:20:51.099225 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 1 00:20:51.099238 kernel: ima: Allocated hash algorithm: sha1
Nov 1 00:20:51.099251 kernel: ima: No architecture policies found
Nov 1 00:20:51.099263 kernel: clk: Disabling unused clocks
Nov 1 00:20:51.099280 kernel: Freeing unused kernel image (initmem) memory: 42884K
Nov 1 00:20:51.099297 kernel: Write protecting the kernel read-only data: 36864k
Nov 1 00:20:51.099310 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Nov 1 00:20:51.099324 kernel: Run /init as init process
Nov 1 00:20:51.099338 kernel: with arguments:
Nov 1 00:20:51.099353 kernel: /init
Nov 1 00:20:51.099367 kernel: with environment:
Nov 1 00:20:51.099382 kernel: HOME=/
Nov 1 00:20:51.099398 kernel: TERM=linux
Nov 1 00:20:51.099421 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 1 00:20:51.099442 systemd[1]: Detected virtualization microsoft.
Nov 1 00:20:51.099459 systemd[1]: Detected architecture x86-64.
Nov 1 00:20:51.099476 systemd[1]: Running in initrd.
Nov 1 00:20:51.099491 systemd[1]: No hostname configured, using default hostname.
Nov 1 00:20:51.099507 systemd[1]: Hostname set to .
Nov 1 00:20:51.099522 systemd[1]: Initializing machine ID from random generator.
Nov 1 00:20:51.099541 systemd[1]: Queued start job for default target initrd.target.
Nov 1 00:20:51.099557 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 1 00:20:51.099571 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 1 00:20:51.099585 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 1 00:20:51.099600 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 1 00:20:51.099616 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 1 00:20:51.099631 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 1 00:20:51.099650 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 1 00:20:51.099664 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 1 00:20:51.101707 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 1 00:20:51.101719 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 1 00:20:51.101732 systemd[1]: Reached target paths.target - Path Units.
Nov 1 00:20:51.101743 systemd[1]: Reached target slices.target - Slice Units.
Nov 1 00:20:51.101756 systemd[1]: Reached target swap.target - Swaps.
Nov 1 00:20:51.101765 systemd[1]: Reached target timers.target - Timer Units.
Nov 1 00:20:51.101777 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 1 00:20:51.101789 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 1 00:20:51.101798 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 1 00:20:51.101809 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 1 00:20:51.101819 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 1 00:20:51.101828 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 1 00:20:51.101839 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 1 00:20:51.101848 systemd[1]: Reached target sockets.target - Socket Units.
Nov 1 00:20:51.101860 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 1 00:20:51.101872 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 1 00:20:51.101883 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 1 00:20:51.101892 systemd[1]: Starting systemd-fsck-usr.service...
Nov 1 00:20:51.101903 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 1 00:20:51.101912 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 1 00:20:51.101924 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 00:20:51.101958 systemd-journald[176]: Collecting audit messages is disabled.
Nov 1 00:20:51.101986 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 1 00:20:51.101997 systemd-journald[176]: Journal started
Nov 1 00:20:51.102019 systemd-journald[176]: Runtime Journal (/run/log/journal/9cd4db197e9a48a4883ee88c92a6caea) is 8.0M, max 158.8M, 150.8M free.
Nov 1 00:20:51.116678 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 1 00:20:51.116866 systemd-modules-load[177]: Inserted module 'overlay'
Nov 1 00:20:51.119506 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 1 00:20:51.123257 systemd[1]: Finished systemd-fsck-usr.service.
Nov 1 00:20:51.128981 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 00:20:51.146886 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 1 00:20:51.160891 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 1 00:20:51.165358 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 1 00:20:51.179635 systemd-modules-load[177]: Inserted module 'br_netfilter'
Nov 1 00:20:51.179921 kernel: Bridge firewalling registered
Nov 1 00:20:51.182179 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 1 00:20:51.185191 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 1 00:20:51.185678 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 1 00:20:51.188305 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 1 00:20:51.215701 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 1 00:20:51.224269 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 1 00:20:51.232215 dracut-cmdline[201]: dracut-dracut-053
Nov 1 00:20:51.232215 dracut-cmdline[201]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478
Nov 1 00:20:51.250118 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 1 00:20:51.265006 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 1 00:20:51.282526 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 1 00:20:51.291934 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 1 00:20:51.299007 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 1 00:20:51.333694 kernel: SCSI subsystem initialized
Nov 1 00:20:51.338478 systemd-resolved[277]: Positive Trust Anchors:
Nov 1 00:20:51.340867 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 1 00:20:51.340925 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 1 00:20:51.365863 kernel: Loading iSCSI transport class v2.0-870.
Nov 1 00:20:51.348299 systemd-resolved[277]: Defaulting to hostname 'linux'.
Nov 1 00:20:51.368691 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 1 00:20:51.374890 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 1 00:20:51.387694 kernel: iscsi: registered transport (tcp)
Nov 1 00:20:51.408536 kernel: iscsi: registered transport (qla4xxx)
Nov 1 00:20:51.408614 kernel: QLogic iSCSI HBA Driver
Nov 1 00:20:51.444343 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 1 00:20:51.455845 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 1 00:20:51.488206 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 1 00:20:51.488290 kernel: device-mapper: uevent: version 1.0.3
Nov 1 00:20:51.491799 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 1 00:20:51.532699 kernel: raid6: avx512x4 gen() 18427 MB/s
Nov 1 00:20:51.551691 kernel: raid6: avx512x2 gen() 18372 MB/s
Nov 1 00:20:51.570681 kernel: raid6: avx512x1 gen() 18399 MB/s
Nov 1 00:20:51.589681 kernel: raid6: avx2x4 gen() 18238 MB/s
Nov 1 00:20:51.608681 kernel: raid6: avx2x2 gen() 18270 MB/s
Nov 1 00:20:51.628486 kernel: raid6: avx2x1 gen() 14163 MB/s
Nov 1 00:20:51.628527 kernel: raid6: using algorithm avx512x4 gen() 18427 MB/s
Nov 1 00:20:51.652317 kernel: raid6: .... xor() 8315 MB/s, rmw enabled
Nov 1 00:20:51.652340 kernel: raid6: using avx512x2 recovery algorithm
Nov 1 00:20:51.675694 kernel: xor: automatically using best checksumming function avx
Nov 1 00:20:51.826702 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 1 00:20:51.836423 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 1 00:20:51.846845 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 1 00:20:51.860587 systemd-udevd[395]: Using default interface naming scheme 'v255'.
Nov 1 00:20:51.865115 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 1 00:20:51.881277 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 1 00:20:51.893947 dracut-pre-trigger[397]: rd.md=0: removing MD RAID activation
Nov 1 00:20:51.920932 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 1 00:20:51.932888 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 1 00:20:51.977251 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 1 00:20:52.003135 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 1 00:20:52.026017 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 1 00:20:52.033371 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 1 00:20:52.040685 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 1 00:20:52.047616 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 1 00:20:52.059891 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 1 00:20:52.077694 kernel: cryptd: max_cpu_qlen set to 1000 Nov 1 00:20:52.080000 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 1 00:20:52.099146 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 1 00:20:52.099376 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 00:20:52.122871 kernel: AVX2 version of gcm_enc/dec engaged. Nov 1 00:20:52.122904 kernel: hv_vmbus: Vmbus version:5.2 Nov 1 00:20:52.103521 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 1 00:20:52.106537 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 00:20:52.106803 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:20:52.110139 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 00:20:52.125128 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 00:20:52.753691 kernel: hv_vmbus: registering driver hyperv_keyboard Nov 1 00:20:52.753714 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Nov 1 00:20:52.753729 kernel: AES CTR mode by8 optimization enabled Nov 1 00:20:52.753740 kernel: pps_core: LinuxPPS API ver. 1 registered Nov 1 00:20:52.753750 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Nov 1 00:20:52.753764 kernel: PTP clock support registered Nov 1 00:20:52.753774 kernel: hv_utils: Registering HyperV Utility Driver Nov 1 00:20:52.753788 kernel: hv_vmbus: registering driver hv_utils Nov 1 00:20:52.753801 kernel: hv_utils: Heartbeat IC version 3.0 Nov 1 00:20:52.753811 kernel: hv_utils: Shutdown IC version 3.2 Nov 1 00:20:52.753824 kernel: hv_utils: TimeSync IC version 4.0 Nov 1 00:20:52.753835 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 1 00:20:52.728157 systemd-resolved[277]: Clock change detected. Flushing caches. Nov 1 00:20:52.773777 kernel: hv_vmbus: registering driver hv_storvsc Nov 1 00:20:52.776607 kernel: scsi host1: storvsc_host_t Nov 1 00:20:52.780551 kernel: scsi host0: storvsc_host_t Nov 1 00:20:52.785567 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Nov 1 00:20:52.788552 kernel: hv_vmbus: registering driver hv_netvsc Nov 1 00:20:52.790739 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:20:52.796782 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Nov 1 00:20:52.808938 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 1 00:20:52.821556 kernel: hv_vmbus: registering driver hid_hyperv Nov 1 00:20:52.834010 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Nov 1 00:20:52.840996 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Nov 1 00:20:52.850550 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Nov 1 00:20:52.851053 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 1 00:20:52.854557 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Nov 1 00:20:52.857934 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Nov 1 00:20:52.874022 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Nov 1 00:20:52.874363 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Nov 1 00:20:52.876735 kernel: sd 0:0:0:0: [sda] Write Protect is off Nov 1 00:20:52.881035 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Nov 1 00:20:52.881333 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Nov 1 00:20:52.887558 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 00:20:52.891573 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Nov 1 00:20:52.995565 kernel: hv_netvsc 6045bde1-0a6d-6045-bde1-0a6d6045bde1 eth0: VF slot 1 added Nov 1 00:20:53.004426 kernel: hv_vmbus: registering driver hv_pci Nov 1 00:20:53.004517 kernel: hv_pci 506f2f26-c35d-4f20-b46e-a7891081c746: PCI VMBus probing: Using version 0x10004 Nov 1 00:20:53.253722 kernel: hv_pci 506f2f26-c35d-4f20-b46e-a7891081c746: PCI host bridge to bus c35d:00 Nov 1 00:20:53.254136 kernel: pci_bus c35d:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Nov 1 00:20:53.257574 kernel: pci_bus c35d:00: No busn resource found for root bus, will use [bus 00-ff] Nov 1 00:20:53.262631 kernel: pci c35d:00:02.0: [15b3:1016] type 00 class 0x020000 Nov 1 00:20:53.267619 kernel: pci c35d:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Nov 1 00:20:53.271878 kernel: pci c35d:00:02.0: enabling Extended Tags Nov 1 00:20:53.283563 kernel: pci c35d:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at c35d:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Nov 1 00:20:53.289427 kernel: pci_bus c35d:00: busn_res: [bus 00-ff] end is updated to 00 Nov 1 00:20:53.289747 kernel: pci c35d:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Nov 1 00:20:53.464202 kernel: mlx5_core c35d:00:02.0: enabling device (0000 -> 0002) Nov 1 00:20:53.468555 kernel: mlx5_core c35d:00:02.0: firmware version: 14.30.5000 Nov 1 00:20:53.683218 kernel: hv_netvsc 
6045bde1-0a6d-6045-bde1-0a6d6045bde1 eth0: VF registering: eth1 Nov 1 00:20:53.683605 kernel: mlx5_core c35d:00:02.0 eth1: joined to eth0 Nov 1 00:20:53.688640 kernel: mlx5_core c35d:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Nov 1 00:20:53.697595 kernel: mlx5_core c35d:00:02.0 enP50013s1: renamed from eth1 Nov 1 00:20:53.697797 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (439) Nov 1 00:20:53.737445 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Nov 1 00:20:53.746177 kernel: BTRFS: device fsid 5d5360dd-ce7d-46d0-bc66-772f2084023b devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (456) Nov 1 00:20:53.747991 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Nov 1 00:20:53.763952 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Nov 1 00:20:53.774012 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Nov 1 00:20:53.774134 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Nov 1 00:20:53.792727 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 1 00:20:53.809558 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 00:20:53.817556 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 00:20:53.826549 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 00:20:54.828625 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 00:20:54.828692 disk-uuid[599]: The operation has completed successfully. Nov 1 00:20:54.917332 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 1 00:20:54.917460 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 1 00:20:54.947690 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Nov 1 00:20:54.954084 sh[712]: Success Nov 1 00:20:54.982583 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Nov 1 00:20:55.351436 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 1 00:20:55.365773 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 1 00:20:55.370915 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 1 00:20:55.412220 kernel: BTRFS info (device dm-0): first mount of filesystem 5d5360dd-ce7d-46d0-bc66-772f2084023b Nov 1 00:20:55.412307 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:20:55.416106 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 1 00:20:55.418886 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 1 00:20:55.421361 kernel: BTRFS info (device dm-0): using free space tree Nov 1 00:20:55.809689 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 1 00:20:55.815434 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 1 00:20:55.824699 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 1 00:20:55.835687 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 1 00:20:55.855139 kernel: BTRFS info (device sda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 00:20:55.855225 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:20:55.857378 kernel: BTRFS info (device sda6): using free space tree Nov 1 00:20:55.915393 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 1 00:20:55.925542 kernel: BTRFS info (device sda6): auto enabling async discard Nov 1 00:20:55.931735 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Nov 1 00:20:55.941552 kernel: BTRFS info (device sda6): last unmount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 00:20:55.951569 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 1 00:20:55.964782 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 1 00:20:55.970960 systemd-networkd[888]: lo: Link UP Nov 1 00:20:55.970964 systemd-networkd[888]: lo: Gained carrier Nov 1 00:20:55.973730 systemd-networkd[888]: Enumeration completed Nov 1 00:20:55.973808 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 1 00:20:55.976128 systemd-networkd[888]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 1 00:20:55.976132 systemd-networkd[888]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 00:20:55.977744 systemd[1]: Reached target network.target - Network. Nov 1 00:20:56.037561 kernel: mlx5_core c35d:00:02.0 enP50013s1: Link up Nov 1 00:20:56.067569 kernel: hv_netvsc 6045bde1-0a6d-6045-bde1-0a6d6045bde1 eth0: Data path switched to VF: enP50013s1 Nov 1 00:20:56.068574 systemd-networkd[888]: enP50013s1: Link UP Nov 1 00:20:56.068758 systemd-networkd[888]: eth0: Link UP Nov 1 00:20:56.068988 systemd-networkd[888]: eth0: Gained carrier Nov 1 00:20:56.069005 systemd-networkd[888]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 1 00:20:56.083423 systemd-networkd[888]: enP50013s1: Gained carrier Nov 1 00:20:56.113607 systemd-networkd[888]: eth0: DHCPv4 address 10.200.8.40/24, gateway 10.200.8.1 acquired from 168.63.129.16 Nov 1 00:20:56.864103 ignition[896]: Ignition 2.19.0 Nov 1 00:20:56.864114 ignition[896]: Stage: fetch-offline Nov 1 00:20:56.864160 ignition[896]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:20:56.868456 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Nov 1 00:20:56.864171 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 1 00:20:56.864277 ignition[896]: parsed url from cmdline: "" Nov 1 00:20:56.864282 ignition[896]: no config URL provided Nov 1 00:20:56.864288 ignition[896]: reading system config file "/usr/lib/ignition/user.ign" Nov 1 00:20:56.864298 ignition[896]: no config at "/usr/lib/ignition/user.ign" Nov 1 00:20:56.864305 ignition[896]: failed to fetch config: resource requires networking Nov 1 00:20:56.866347 ignition[896]: Ignition finished successfully Nov 1 00:20:56.886719 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Nov 1 00:20:56.909055 ignition[904]: Ignition 2.19.0 Nov 1 00:20:56.909067 ignition[904]: Stage: fetch Nov 1 00:20:56.909282 ignition[904]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:20:56.909295 ignition[904]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 1 00:20:56.909396 ignition[904]: parsed url from cmdline: "" Nov 1 00:20:56.909399 ignition[904]: no config URL provided Nov 1 00:20:56.909404 ignition[904]: reading system config file "/usr/lib/ignition/user.ign" Nov 1 00:20:56.909411 ignition[904]: no config at "/usr/lib/ignition/user.ign" Nov 1 00:20:56.909437 ignition[904]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Nov 1 00:20:56.999371 ignition[904]: GET result: OK Nov 1 00:20:56.999487 ignition[904]: config has been read from IMDS userdata Nov 1 00:20:56.999529 ignition[904]: parsing config with SHA512: d572d0fa8b1f7b96b9d754252b4676e9deffe761078187e03c9706b5b1b59cf65a3188510f2f859d2cc31177e57d1c6709b9ba523de76cbf9d1b516e7f8e3e3f Nov 1 00:20:57.006251 unknown[904]: fetched base config from "system" Nov 1 00:20:57.006744 ignition[904]: fetch: fetch complete Nov 1 00:20:57.006268 unknown[904]: fetched base config from "system" Nov 1 00:20:57.006750 ignition[904]: fetch: fetch passed Nov 1 00:20:57.006278 unknown[904]: fetched user config from 
"azure" Nov 1 00:20:57.006799 ignition[904]: Ignition finished successfully Nov 1 00:20:57.012507 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 1 00:20:57.029704 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 1 00:20:57.050893 ignition[910]: Ignition 2.19.0 Nov 1 00:20:57.050904 ignition[910]: Stage: kargs Nov 1 00:20:57.053489 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 1 00:20:57.051140 ignition[910]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:20:57.051154 ignition[910]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 1 00:20:57.052022 ignition[910]: kargs: kargs passed Nov 1 00:20:57.052077 ignition[910]: Ignition finished successfully Nov 1 00:20:57.080709 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 1 00:20:57.097993 ignition[916]: Ignition 2.19.0 Nov 1 00:20:57.098004 ignition[916]: Stage: disks Nov 1 00:20:57.101229 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 1 00:20:57.098224 ignition[916]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:20:57.105803 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 1 00:20:57.098240 ignition[916]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 1 00:20:57.108289 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 1 00:20:57.099486 ignition[916]: disks: disks passed Nov 1 00:20:57.108707 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 1 00:20:57.099551 ignition[916]: Ignition finished successfully Nov 1 00:20:57.109309 systemd[1]: Reached target sysinit.target - System Initialization. Nov 1 00:20:57.112351 systemd[1]: Reached target basic.target - Basic System. Nov 1 00:20:57.148828 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Nov 1 00:20:57.212591 systemd-fsck[924]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Nov 1 00:20:57.218014 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 1 00:20:57.228653 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 1 00:20:57.321586 kernel: EXT4-fs (sda9): mounted filesystem cb9d31b8-5e00-461c-b45e-c304d1f8091c r/w with ordered data mode. Quota mode: none. Nov 1 00:20:57.322146 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 1 00:20:57.325092 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 1 00:20:57.365645 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 1 00:20:57.382552 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (935) Nov 1 00:20:57.391607 kernel: BTRFS info (device sda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 00:20:57.391694 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:20:57.395145 kernel: BTRFS info (device sda6): using free space tree Nov 1 00:20:57.396714 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 1 00:20:57.403315 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Nov 1 00:20:57.412102 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 1 00:20:57.418063 kernel: BTRFS info (device sda6): auto enabling async discard Nov 1 00:20:57.412146 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 1 00:20:57.425742 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 1 00:20:57.428322 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 1 00:20:57.437724 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Nov 1 00:20:58.060840 systemd-networkd[888]: eth0: Gained IPv6LL Nov 1 00:20:58.145367 coreos-metadata[950]: Nov 01 00:20:58.145 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Nov 1 00:20:58.149627 coreos-metadata[950]: Nov 01 00:20:58.147 INFO Fetch successful Nov 1 00:20:58.149627 coreos-metadata[950]: Nov 01 00:20:58.147 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Nov 1 00:20:58.158610 coreos-metadata[950]: Nov 01 00:20:58.158 INFO Fetch successful Nov 1 00:20:58.161211 coreos-metadata[950]: Nov 01 00:20:58.161 INFO wrote hostname ci-4081.3.6-n-534d15dd10 to /sysroot/etc/hostname Nov 1 00:20:58.162905 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 1 00:20:58.429131 initrd-setup-root[965]: cut: /sysroot/etc/passwd: No such file or directory Nov 1 00:20:58.477914 initrd-setup-root[972]: cut: /sysroot/etc/group: No such file or directory Nov 1 00:20:58.497367 initrd-setup-root[979]: cut: /sysroot/etc/shadow: No such file or directory Nov 1 00:20:58.504495 initrd-setup-root[986]: cut: /sysroot/etc/gshadow: No such file or directory Nov 1 00:20:59.544865 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 1 00:20:59.555734 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 1 00:20:59.562819 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 1 00:20:59.569969 kernel: BTRFS info (device sda6): last unmount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 00:20:59.573807 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Nov 1 00:20:59.597714 ignition[1053]: INFO : Ignition 2.19.0 Nov 1 00:20:59.597714 ignition[1053]: INFO : Stage: mount Nov 1 00:20:59.605515 ignition[1053]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 00:20:59.605515 ignition[1053]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 1 00:20:59.605515 ignition[1053]: INFO : mount: mount passed Nov 1 00:20:59.605515 ignition[1053]: INFO : Ignition finished successfully Nov 1 00:20:59.601294 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 1 00:20:59.620711 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 1 00:20:59.630103 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 1 00:20:59.655563 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1061) Nov 1 00:20:59.659729 kernel: BTRFS info (device sda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 00:20:59.659808 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 1 00:20:59.668303 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:20:59.668322 kernel: BTRFS info (device sda6): using free space tree Nov 1 00:20:59.677548 kernel: BTRFS info (device sda6): auto enabling async discard Nov 1 00:20:59.679353 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 1 00:20:59.703433 ignition[1082]: INFO : Ignition 2.19.0 Nov 1 00:20:59.703433 ignition[1082]: INFO : Stage: files Nov 1 00:20:59.708069 ignition[1082]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 00:20:59.708069 ignition[1082]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 1 00:20:59.708069 ignition[1082]: DEBUG : files: compiled without relabeling support, skipping Nov 1 00:20:59.747079 ignition[1082]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 1 00:20:59.747079 ignition[1082]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 1 00:20:59.878646 ignition[1082]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 1 00:20:59.883109 ignition[1082]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 1 00:20:59.883109 ignition[1082]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 1 00:20:59.879146 unknown[1082]: wrote ssh authorized keys file for user: core Nov 1 00:20:59.909708 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 1 00:20:59.917449 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 1 00:20:59.995176 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 1 00:21:00.058449 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 1 00:21:00.063899 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 1 00:21:00.068601 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 1 
00:21:00.068601 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 1 00:21:00.078104 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 1 00:21:00.082903 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 00:21:00.087766 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 00:21:00.092605 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 00:21:00.097275 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 00:21:00.115762 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 00:21:00.121076 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 00:21:00.121076 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 1 00:21:00.121076 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 1 00:21:00.121076 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 1 00:21:00.121076 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Nov 1 00:21:00.411619 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 1 00:21:00.731868 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 1 00:21:00.731868 ignition[1082]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 1 00:21:00.758933 ignition[1082]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 00:21:00.764479 ignition[1082]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 00:21:00.764479 ignition[1082]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 1 00:21:00.764479 ignition[1082]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Nov 1 00:21:00.764479 ignition[1082]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Nov 1 00:21:00.764479 ignition[1082]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 1 00:21:00.764479 ignition[1082]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 1 00:21:00.764479 ignition[1082]: INFO : files: files passed Nov 1 00:21:00.764479 ignition[1082]: INFO : Ignition finished successfully Nov 1 00:21:00.761479 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 1 00:21:00.802383 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 1 00:21:00.806736 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Nov 1 00:21:00.812028 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 1 00:21:00.812571 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 1 00:21:00.836841 initrd-setup-root-after-ignition[1110]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:21:00.836841 initrd-setup-root-after-ignition[1110]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:21:00.845364 initrd-setup-root-after-ignition[1114]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:21:00.842480 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 1 00:21:00.847331 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 1 00:21:00.866754 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 1 00:21:00.890383 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 1 00:21:00.893019 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 1 00:21:00.900008 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 1 00:21:00.902990 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 1 00:21:00.907234 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 1 00:21:00.917791 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 1 00:21:00.932391 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 1 00:21:00.943784 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 1 00:21:00.956061 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 1 00:21:00.962205 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Nov 1 00:21:00.965522 systemd[1]: Stopped target timers.target - Timer Units. Nov 1 00:21:00.973573 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 1 00:21:00.973721 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 1 00:21:00.979818 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 1 00:21:00.988420 systemd[1]: Stopped target basic.target - Basic System. Nov 1 00:21:00.990992 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 1 00:21:00.996031 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 1 00:21:01.002091 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 1 00:21:01.008209 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 1 00:21:01.014088 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 1 00:21:01.023324 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 1 00:21:01.029311 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 1 00:21:01.034850 systemd[1]: Stopped target swap.target - Swaps. Nov 1 00:21:01.039524 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 1 00:21:01.039734 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 1 00:21:01.044813 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 1 00:21:01.050232 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 1 00:21:01.056018 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 1 00:21:01.058808 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 1 00:21:01.062384 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 1 00:21:01.062520 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Nov 1 00:21:01.079872 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 1 00:21:01.080087 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 1 00:21:01.089899 systemd[1]: ignition-files.service: Deactivated successfully. Nov 1 00:21:01.090079 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 1 00:21:01.095270 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 1 00:21:01.095375 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 1 00:21:01.117733 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 1 00:21:01.123007 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 1 00:21:01.123167 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 1 00:21:01.135438 ignition[1134]: INFO : Ignition 2.19.0 Nov 1 00:21:01.135438 ignition[1134]: INFO : Stage: umount Nov 1 00:21:01.135438 ignition[1134]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 00:21:01.135438 ignition[1134]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 1 00:21:01.135438 ignition[1134]: INFO : umount: umount passed Nov 1 00:21:01.135438 ignition[1134]: INFO : Ignition finished successfully Nov 1 00:21:01.149951 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 1 00:21:01.154738 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 1 00:21:01.155054 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 1 00:21:01.163837 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 1 00:21:01.164041 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 1 00:21:01.174383 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 1 00:21:01.177038 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Nov 1 00:21:01.183223 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 1 00:21:01.183519 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 1 00:21:01.188956 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 1 00:21:01.189007 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 1 00:21:01.194443 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 1 00:21:01.194495 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 1 00:21:01.199772 systemd[1]: Stopped target network.target - Network. Nov 1 00:21:01.202069 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 1 00:21:01.202114 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 1 00:21:01.205410 systemd[1]: Stopped target paths.target - Path Units. Nov 1 00:21:01.208127 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 1 00:21:01.212619 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 1 00:21:01.215654 systemd[1]: Stopped target slices.target - Slice Units. Nov 1 00:21:01.216548 systemd[1]: Stopped target sockets.target - Socket Units. Nov 1 00:21:01.217023 systemd[1]: iscsid.socket: Deactivated successfully. Nov 1 00:21:01.217066 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 1 00:21:01.217469 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 1 00:21:01.217502 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 1 00:21:01.218367 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 1 00:21:01.218414 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 1 00:21:01.218832 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 1 00:21:01.218867 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Nov 1 00:21:01.219417 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 1 00:21:01.219764 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 1 00:21:01.220875 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 1 00:21:01.221415 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 1 00:21:01.221514 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 1 00:21:01.264162 systemd-networkd[888]: eth0: DHCPv6 lease lost Nov 1 00:21:01.265896 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 1 00:21:01.266004 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 1 00:21:01.282477 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 1 00:21:01.282616 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 1 00:21:01.290954 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 1 00:21:01.291002 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 1 00:21:01.312732 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 1 00:21:01.319766 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 1 00:21:01.319841 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 1 00:21:01.326181 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 1 00:21:01.326243 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 1 00:21:01.331614 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 1 00:21:01.331677 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 1 00:21:01.334901 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 1 00:21:01.334958 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Nov 1 00:21:01.338505 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 1 00:21:01.373145 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 1 00:21:01.373289 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 1 00:21:01.383471 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 1 00:21:01.383970 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 1 00:21:01.388836 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 1 00:21:01.388879 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 1 00:21:01.391654 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 1 00:21:01.391698 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 1 00:21:01.394685 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 1 00:21:01.430570 kernel: hv_netvsc 6045bde1-0a6d-6045-bde1-0a6d6045bde1 eth0: Data path switched from VF: enP50013s1 Nov 1 00:21:01.394731 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 1 00:21:01.400213 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 1 00:21:01.400269 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 00:21:01.430164 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 1 00:21:01.436458 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 1 00:21:01.436550 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 1 00:21:01.442653 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 00:21:01.442761 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:21:01.466209 systemd[1]: network-cleanup.service: Deactivated successfully. 
Nov 1 00:21:01.466353 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 1 00:21:01.471949 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 1 00:21:01.472032 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 1 00:21:01.733285 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 1 00:21:01.733444 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 1 00:21:01.741473 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 1 00:21:01.744492 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 1 00:21:01.744565 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 1 00:21:01.762691 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 1 00:21:02.201270 systemd[1]: Switching root. Nov 1 00:21:02.295570 systemd-journald[176]: Journal stopped Nov 1 00:20:51.093884 kernel: SMBIOS 3.1.0 present. Nov 1 00:20:51.093892 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Nov 1 00:20:51.093899 kernel: Hypervisor detected: Microsoft Hyper-V Nov 1 00:20:51.093909 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Nov 1 00:20:51.093916 kernel: Hyper-V: Host Build 10.0.20348.1827-1-0 Nov 1 00:20:51.093923 kernel: Hyper-V: Nested features: 0x1e0101 Nov 1 00:20:51.093933 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Nov 1 00:20:51.093942 kernel: Hyper-V: Using hypercall for remote TLB flush Nov 1 00:20:51.093949 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Nov 1 00:20:51.093960 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Nov 1 00:20:51.093967 kernel: tsc: Marking TSC unstable due to running on Hyper-V Nov 1 00:20:51.093975 kernel: tsc: Detected 2593.905 MHz processor Nov 1 
00:20:51.093985 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 1 00:20:51.093992 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 1 00:20:51.093999 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Nov 1 00:20:51.094006 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Nov 1 00:20:51.094019 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 1 00:20:51.094026 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Nov 1 00:20:51.094033 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Nov 1 00:20:51.094043 kernel: Using GB pages for direct mapping Nov 1 00:20:51.094050 kernel: Secure boot disabled Nov 1 00:20:51.094058 kernel: ACPI: Early table checksum verification disabled Nov 1 00:20:51.094067 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Nov 1 00:20:51.094079 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 1 00:20:51.094090 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 1 00:20:51.094098 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Nov 1 00:20:51.094108 kernel: ACPI: FACS 0x000000003FFFE000 000040 Nov 1 00:20:51.094118 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 1 00:20:51.094127 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 1 00:20:51.094138 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 1 00:20:51.094151 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 1 00:20:51.094159 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 1 00:20:51.094168 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 1 00:20:51.094178 kernel: ACPI: FPDT 
0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 1 00:20:51.094185 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Nov 1 00:20:51.094196 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Nov 1 00:20:51.094204 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Nov 1 00:20:51.094212 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Nov 1 00:20:51.094224 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Nov 1 00:20:51.094231 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Nov 1 00:20:51.094242 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Nov 1 00:20:51.094250 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Nov 1 00:20:51.094259 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Nov 1 00:20:51.094268 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Nov 1 00:20:51.094275 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Nov 1 00:20:51.094285 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Nov 1 00:20:51.094293 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Nov 1 00:20:51.094306 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Nov 1 00:20:51.094313 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Nov 1 00:20:51.094322 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Nov 1 00:20:51.094332 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Nov 1 00:20:51.094339 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Nov 1 00:20:51.094350 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Nov 1 00:20:51.094358 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Nov 1 00:20:51.094368 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 
0x100000000000-0x1fffffffffff] hotplug Nov 1 00:20:51.094380 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Nov 1 00:20:51.094392 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Nov 1 00:20:51.094403 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Nov 1 00:20:51.094412 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Nov 1 00:20:51.094423 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Nov 1 00:20:51.094432 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Nov 1 00:20:51.094440 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Nov 1 00:20:51.094451 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Nov 1 00:20:51.094461 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Nov 1 00:20:51.094469 kernel: Zone ranges: Nov 1 00:20:51.094482 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 1 00:20:51.094491 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Nov 1 00:20:51.094499 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Nov 1 00:20:51.094509 kernel: Movable zone start for each node Nov 1 00:20:51.094519 kernel: Early memory node ranges Nov 1 00:20:51.094528 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Nov 1 00:20:51.094538 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Nov 1 00:20:51.094548 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Nov 1 00:20:51.094558 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Nov 1 00:20:51.094572 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Nov 1 00:20:51.094583 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 1 00:20:51.094594 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Nov 1 00:20:51.094604 kernel: On node 0, zone DMA32: 
190 pages in unavailable ranges Nov 1 00:20:51.094616 kernel: ACPI: PM-Timer IO Port: 0x408 Nov 1 00:20:51.094628 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Nov 1 00:20:51.094641 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Nov 1 00:20:51.094653 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 1 00:20:51.094665 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 1 00:20:51.094700 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Nov 1 00:20:51.094714 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Nov 1 00:20:51.094726 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Nov 1 00:20:51.094741 kernel: Booting paravirtualized kernel on Hyper-V Nov 1 00:20:51.094756 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 1 00:20:51.094770 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Nov 1 00:20:51.094785 kernel: percpu: Embedded 58 pages/cpu s196712 r8192 d32664 u1048576 Nov 1 00:20:51.094799 kernel: pcpu-alloc: s196712 r8192 d32664 u1048576 alloc=1*2097152 Nov 1 00:20:51.094813 kernel: pcpu-alloc: [0] 0 1 Nov 1 00:20:51.094830 kernel: Hyper-V: PV spinlocks enabled Nov 1 00:20:51.094844 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Nov 1 00:20:51.094860 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478 Nov 1 00:20:51.094875 kernel: random: crng init done Nov 1 00:20:51.094888 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Nov 1 
00:20:51.094901 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 1 00:20:51.094915 kernel: Fallback order for Node 0: 0 Nov 1 00:20:51.094929 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Nov 1 00:20:51.094947 kernel: Policy zone: Normal Nov 1 00:20:51.094972 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 1 00:20:51.094987 kernel: software IO TLB: area num 2. Nov 1 00:20:51.095004 kernel: Memory: 8077072K/8387460K available (12288K kernel code, 2288K rwdata, 22748K rodata, 42884K init, 2316K bss, 310128K reserved, 0K cma-reserved) Nov 1 00:20:51.095020 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Nov 1 00:20:51.095035 kernel: ftrace: allocating 37980 entries in 149 pages Nov 1 00:20:51.095050 kernel: ftrace: allocated 149 pages with 4 groups Nov 1 00:20:51.095065 kernel: Dynamic Preempt: voluntary Nov 1 00:20:51.095080 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 1 00:20:51.095097 kernel: rcu: RCU event tracing is enabled. Nov 1 00:20:51.095115 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Nov 1 00:20:51.095131 kernel: Trampoline variant of Tasks RCU enabled. Nov 1 00:20:51.095146 kernel: Rude variant of Tasks RCU enabled. Nov 1 00:20:51.095161 kernel: Tracing variant of Tasks RCU enabled. Nov 1 00:20:51.095176 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 1 00:20:51.095191 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Nov 1 00:20:51.095209 kernel: Using NULL legacy PIC Nov 1 00:20:51.095224 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Nov 1 00:20:51.095240 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
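The "Memory: 8077072K/8387460K available" accounting above can be cross-checked against the BIOS-e820 map at the top of this log: summing the ranges the firmware marked "usable" yields 8387464K, and the "e820: update [mem 0x00000000-0x00000fff] usable ==> reserved" line accounts for the remaining 4K (page zero). A minimal sketch of that tally:

```python
import re

# BIOS-e820 entries copied verbatim from the map at the top of this log.
E820 = """\
BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20
BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved
BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
"""

RANGE = re.compile(r"\[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (.+)$")

def usable_bytes(text: str) -> int:
    """Sum the sizes of every range the firmware marked 'usable'."""
    total = 0
    for line in text.splitlines():
        m = RANGE.search(line)
        if m and m.group(3) == "usable":
            start, end = int(m.group(1), 16), int(m.group(2), 16)
            total += end - start + 1   # ranges are inclusive
    return total

print(usable_bytes(E820) // 1024, "KiB")   # 8387464 KiB
```

The difference between this firmware total and the 8077072K "available" figure is the kernel's own reservations (code, rodata, init, bss, and the reserved regions listed in the same line).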
Nov 1 00:20:51.095255 kernel: Console: colour dummy device 80x25 Nov 1 00:20:51.095269 kernel: printk: console [tty1] enabled Nov 1 00:20:51.095285 kernel: printk: console [ttyS0] enabled Nov 1 00:20:51.095300 kernel: printk: bootconsole [earlyser0] disabled Nov 1 00:20:51.095315 kernel: ACPI: Core revision 20230628 Nov 1 00:20:51.095330 kernel: Failed to register legacy timer interrupt Nov 1 00:20:51.095345 kernel: APIC: Switch to symmetric I/O mode setup Nov 1 00:20:51.095363 kernel: Hyper-V: enabling crash_kexec_post_notifiers Nov 1 00:20:51.095378 kernel: Hyper-V: Using IPI hypercalls Nov 1 00:20:51.095393 kernel: APIC: send_IPI() replaced with hv_send_ipi() Nov 1 00:20:51.095408 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Nov 1 00:20:51.095423 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Nov 1 00:20:51.095437 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Nov 1 00:20:51.095450 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Nov 1 00:20:51.095463 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Nov 1 00:20:51.095477 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593905) Nov 1 00:20:51.095493 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Nov 1 00:20:51.095505 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Nov 1 00:20:51.095524 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 1 00:20:51.095541 kernel: Spectre V2 : Mitigation: Retpolines Nov 1 00:20:51.095557 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Nov 1 00:20:51.095573 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
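The BogoMIPS figure in the "Calibrating delay loop (skipped)" line above is pure arithmetic once lpj is known: the kernel reports lpj * HZ / 500000. A quick check (HZ=1000 is an assumption about this kernel's config, but it is the value that reproduces both the per-CPU number here and the 10375.62 total reported at SMP bring-up):

```python
# BogoMIPS as the kernel derives it when delay-loop calibration is
# skipped: loops-per-jiffy scaled by the tick rate.
lpj = 2593905     # from the "Calibrating delay loop (skipped)" line above
HZ = 1000         # assumed CONFIG_HZ; reproduces the logged figures

bogomips = lpj * HZ / 500_000
print(f"{bogomips:.2f} BogoMIPS per CPU")    # 5187.81
print(f"{2 * bogomips:.2f} across 2 CPUs")   # 10375.62
```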
Nov 1 00:20:51.095589 kernel: RETBleed: Vulnerable Nov 1 00:20:51.095602 kernel: Speculative Store Bypass: Vulnerable Nov 1 00:20:51.095616 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Nov 1 00:20:51.095630 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Nov 1 00:20:51.095648 kernel: active return thunk: its_return_thunk Nov 1 00:20:51.095662 kernel: ITS: Mitigation: Aligned branch/return thunks Nov 1 00:20:51.095699 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 1 00:20:51.095714 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 1 00:20:51.095729 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 1 00:20:51.095743 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Nov 1 00:20:51.095758 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Nov 1 00:20:51.095772 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Nov 1 00:20:51.095787 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 1 00:20:51.095801 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Nov 1 00:20:51.095816 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Nov 1 00:20:51.095834 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Nov 1 00:20:51.095848 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Nov 1 00:20:51.095863 kernel: Freeing SMP alternatives memory: 32K Nov 1 00:20:51.095877 kernel: pid_max: default: 32768 minimum: 301 Nov 1 00:20:51.095892 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Nov 1 00:20:51.095906 kernel: landlock: Up and running. Nov 1 00:20:51.095920 kernel: SELinux: Initializing. 
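The xstate offsets reported above follow from the compacted XSAVE layout: a 512-byte legacy region plus a 64-byte header, then each enabled dynamic feature packed back-to-back. Reproducing the arithmetic from the logged sizes:

```python
# Compacted-format XSAVE layout: legacy area + header, then each enabled
# feature at the next offset.  Sizes are taken from the log lines above.
LEGACY, HEADER = 512, 64
features = [
    ("AVX registers",       256),   # xstate 2
    ("AVX-512 opmask",       64),   # xstate 5
    ("AVX-512 Hi256",       512),   # xstate 6
    ("AVX-512 ZMM_Hi256",  1024),   # xstate 7
]

offset = LEGACY + HEADER              # first dynamic feature lands at 576
for name, size in features:
    print(f"{name}: offset {offset}, size {size}")
    offset += size

print("total context size:", offset)  # 2432 bytes, matching the log
```

The computed offsets (576, 832, 896, 1408) and the 2432-byte total match the xstate_offset and "context size is 2432 bytes" lines exactly.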
Nov 1 00:20:51.095935 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 1 00:20:51.095950 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 1 00:20:51.095965 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Nov 1 00:20:51.095979 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 1 00:20:51.095997 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 1 00:20:51.096011 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 1 00:20:51.096026 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Nov 1 00:20:51.096041 kernel: signal: max sigframe size: 3632 Nov 1 00:20:51.096055 kernel: rcu: Hierarchical SRCU implementation. Nov 1 00:20:51.096070 kernel: rcu: Max phase no-delay instances is 400. Nov 1 00:20:51.096085 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Nov 1 00:20:51.096099 kernel: smp: Bringing up secondary CPUs ... Nov 1 00:20:51.096113 kernel: smpboot: x86: Booting SMP configuration: Nov 1 00:20:51.096131 kernel: .... node #0, CPUs: #1 Nov 1 00:20:51.096146 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Nov 1 00:20:51.096162 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Nov 1 00:20:51.096176 kernel: smp: Brought up 1 node, 2 CPUs Nov 1 00:20:51.096191 kernel: smpboot: Max logical packages: 1 Nov 1 00:20:51.096206 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Nov 1 00:20:51.096220 kernel: devtmpfs: initialized Nov 1 00:20:51.096235 kernel: x86/mm: Memory block size: 128MB Nov 1 00:20:51.096252 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Nov 1 00:20:51.096267 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 1 00:20:51.096282 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Nov 1 00:20:51.096297 kernel: pinctrl core: initialized pinctrl subsystem Nov 1 00:20:51.096311 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 1 00:20:51.096326 kernel: audit: initializing netlink subsys (disabled) Nov 1 00:20:51.096340 kernel: audit: type=2000 audit(1761956449.028:1): state=initialized audit_enabled=0 res=1 Nov 1 00:20:51.096355 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 1 00:20:51.096369 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 1 00:20:51.096387 kernel: cpuidle: using governor menu Nov 1 00:20:51.096401 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 1 00:20:51.096415 kernel: dca service started, version 1.12.1 Nov 1 00:20:51.096430 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Nov 1 00:20:51.096445 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Nov 1 00:20:51.096459 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 1 00:20:51.096474 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 1 00:20:51.096488 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 1 00:20:51.096503 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 1 00:20:51.096521 kernel: ACPI: Added _OSI(Module Device) Nov 1 00:20:51.096536 kernel: ACPI: Added _OSI(Processor Device) Nov 1 00:20:51.096550 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 1 00:20:51.096565 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 1 00:20:51.096580 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Nov 1 00:20:51.096594 kernel: ACPI: Interpreter enabled Nov 1 00:20:51.096609 kernel: ACPI: PM: (supports S0 S5) Nov 1 00:20:51.096624 kernel: ACPI: Using IOAPIC for interrupt routing Nov 1 00:20:51.096638 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 1 00:20:51.096656 kernel: PCI: Ignoring E820 reservations for host bridge windows Nov 1 00:20:51.096686 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Nov 1 00:20:51.096701 kernel: iommu: Default domain type: Translated Nov 1 00:20:51.096716 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 1 00:20:51.096730 kernel: efivars: Registered efivars operations Nov 1 00:20:51.096745 kernel: PCI: Using ACPI for IRQ routing Nov 1 00:20:51.096759 kernel: PCI: System does not support PCI Nov 1 00:20:51.096774 kernel: vgaarb: loaded Nov 1 00:20:51.096788 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Nov 1 00:20:51.096806 kernel: VFS: Disk quotas dquot_6.6.0 Nov 1 00:20:51.096820 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 1 00:20:51.096835 kernel: pnp: PnP ACPI init Nov 1 00:20:51.096849 kernel: pnp: PnP ACPI: found 3 devices Nov 1 00:20:51.096864 kernel: clocksource: acpi_pm: mask: 
0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 1 00:20:51.096879 kernel: NET: Registered PF_INET protocol family Nov 1 00:20:51.096893 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Nov 1 00:20:51.096908 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Nov 1 00:20:51.096923 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 1 00:20:51.096940 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 1 00:20:51.096955 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Nov 1 00:20:51.096969 kernel: TCP: Hash tables configured (established 65536 bind 65536) Nov 1 00:20:51.096984 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Nov 1 00:20:51.096998 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Nov 1 00:20:51.097013 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 1 00:20:51.097027 kernel: NET: Registered PF_XDP protocol family Nov 1 00:20:51.097042 kernel: PCI: CLS 0 bytes, default 64 Nov 1 00:20:51.097056 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Nov 1 00:20:51.097074 kernel: software IO TLB: mapped [mem 0x000000003ae73000-0x000000003ee73000] (64MB) Nov 1 00:20:51.097089 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Nov 1 00:20:51.097103 kernel: Initialise system trusted keyrings Nov 1 00:20:51.097118 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Nov 1 00:20:51.097132 kernel: Key type asymmetric registered Nov 1 00:20:51.097146 kernel: Asymmetric key parser 'x509' registered Nov 1 00:20:51.097161 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Nov 1 00:20:51.097175 kernel: io scheduler mq-deadline registered Nov 1 00:20:51.097190 kernel: io scheduler kyber registered Nov 1 00:20:51.097207 kernel: io scheduler bfq 
registered Nov 1 00:20:51.097222 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 1 00:20:51.097236 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 1 00:20:51.097251 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 1 00:20:51.097266 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Nov 1 00:20:51.097280 kernel: i8042: PNP: No PS/2 controller found. Nov 1 00:20:51.097462 kernel: rtc_cmos 00:02: registered as rtc0 Nov 1 00:20:51.097592 kernel: rtc_cmos 00:02: setting system clock to 2025-11-01T00:20:50 UTC (1761956450) Nov 1 00:20:51.098435 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Nov 1 00:20:51.098460 kernel: intel_pstate: CPU model not supported Nov 1 00:20:51.098476 kernel: efifb: probing for efifb Nov 1 00:20:51.098490 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Nov 1 00:20:51.098506 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Nov 1 00:20:51.098521 kernel: efifb: scrolling: redraw Nov 1 00:20:51.098535 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Nov 1 00:20:51.098550 kernel: Console: switching to colour frame buffer device 128x48 Nov 1 00:20:51.098565 kernel: fb0: EFI VGA frame buffer device Nov 1 00:20:51.098584 kernel: pstore: Using crash dump compression: deflate Nov 1 00:20:51.098599 kernel: pstore: Registered efi_pstore as persistent store backend Nov 1 00:20:51.098614 kernel: NET: Registered PF_INET6 protocol family Nov 1 00:20:51.098628 kernel: Segment Routing with IPv6 Nov 1 00:20:51.098643 kernel: In-situ OAM (IOAM) with IPv6 Nov 1 00:20:51.098658 kernel: NET: Registered PF_PACKET protocol family Nov 1 00:20:51.099099 kernel: Key type dns_resolver registered Nov 1 00:20:51.099117 kernel: IPI shorthand broadcast: enabled Nov 1 00:20:51.099130 kernel: sched_clock: Marking stable (874037300, 47887700)->(1137975900, -216050900) Nov 1 00:20:51.099149 kernel: registered taskstats version 1 Nov 1 
00:20:51.099163 kernel: Loading compiled-in X.509 certificates Nov 1 00:20:51.099183 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: cc4975b6f5d9e3149f7a95c8552b8f9120c3a1f4' Nov 1 00:20:51.099199 kernel: Key type .fscrypt registered Nov 1 00:20:51.099213 kernel: Key type fscrypt-provisioning registered Nov 1 00:20:51.099225 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 1 00:20:51.099238 kernel: ima: Allocated hash algorithm: sha1 Nov 1 00:20:51.099251 kernel: ima: No architecture policies found Nov 1 00:20:51.099263 kernel: clk: Disabling unused clocks Nov 1 00:20:51.099280 kernel: Freeing unused kernel image (initmem) memory: 42884K Nov 1 00:20:51.099297 kernel: Write protecting the kernel read-only data: 36864k Nov 1 00:20:51.099310 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Nov 1 00:20:51.099324 kernel: Run /init as init process Nov 1 00:20:51.099338 kernel: with arguments: Nov 1 00:20:51.099353 kernel: /init Nov 1 00:20:51.099367 kernel: with environment: Nov 1 00:20:51.099382 kernel: HOME=/ Nov 1 00:20:51.099398 kernel: TERM=linux Nov 1 00:20:51.099421 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 1 00:20:51.099442 systemd[1]: Detected virtualization microsoft. Nov 1 00:20:51.099459 systemd[1]: Detected architecture x86-64. Nov 1 00:20:51.099476 systemd[1]: Running in initrd. Nov 1 00:20:51.099491 systemd[1]: No hostname configured, using default hostname. Nov 1 00:20:51.099507 systemd[1]: Hostname set to . Nov 1 00:20:51.099522 systemd[1]: Initializing machine ID from random generator. Nov 1 00:20:51.099541 systemd[1]: Queued start job for default target initrd.target. 
Nov 1 00:20:51.099557 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 1 00:20:51.099571 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 1 00:20:51.099585 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 1 00:20:51.099600 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 1 00:20:51.099616 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 1 00:20:51.099631 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 1 00:20:51.099650 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 1 00:20:51.099664 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 1 00:20:51.101707 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 1 00:20:51.101719 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 1 00:20:51.101732 systemd[1]: Reached target paths.target - Path Units. Nov 1 00:20:51.101743 systemd[1]: Reached target slices.target - Slice Units. Nov 1 00:20:51.101756 systemd[1]: Reached target swap.target - Swaps. Nov 1 00:20:51.101765 systemd[1]: Reached target timers.target - Timer Units. Nov 1 00:20:51.101777 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 1 00:20:51.101789 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 1 00:20:51.101798 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 1 00:20:51.101809 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Nov 1 00:20:51.101819 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 1 00:20:51.101828 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 1 00:20:51.101839 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 1 00:20:51.101848 systemd[1]: Reached target sockets.target - Socket Units. Nov 1 00:20:51.101860 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 1 00:20:51.101872 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 1 00:20:51.101883 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 1 00:20:51.101892 systemd[1]: Starting systemd-fsck-usr.service... Nov 1 00:20:51.101903 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 1 00:20:51.101912 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 1 00:20:51.101924 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 00:20:51.101958 systemd-journald[176]: Collecting audit messages is disabled. Nov 1 00:20:51.101986 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 1 00:20:51.101997 systemd-journald[176]: Journal started Nov 1 00:20:51.102019 systemd-journald[176]: Runtime Journal (/run/log/journal/9cd4db197e9a48a4883ee88c92a6caea) is 8.0M, max 158.8M, 150.8M free. Nov 1 00:20:51.116678 systemd[1]: Started systemd-journald.service - Journal Service. Nov 1 00:20:51.116866 systemd-modules-load[177]: Inserted module 'overlay' Nov 1 00:20:51.119506 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 1 00:20:51.123257 systemd[1]: Finished systemd-fsck-usr.service. Nov 1 00:20:51.128981 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:20:51.146886 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Nov 1 00:20:51.160891 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 1 00:20:51.165358 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 1 00:20:51.179635 systemd-modules-load[177]: Inserted module 'br_netfilter' Nov 1 00:20:51.179921 kernel: Bridge firewalling registered Nov 1 00:20:51.182179 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 1 00:20:51.185191 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 1 00:20:51.185678 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 00:20:51.188305 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 1 00:20:51.215701 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 1 00:20:51.224269 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 1 00:20:51.232215 dracut-cmdline[201]: dracut-dracut-053 Nov 1 00:20:51.232215 dracut-cmdline[201]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ade41980c48607de3d2d18dc444731ec5388853e3a75ed2d5a13ce616b36f478 Nov 1 00:20:51.250118 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 1 00:20:51.265006 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 1 00:20:51.282526 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Nov 1 00:20:51.291934 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 1 00:20:51.299007 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 1 00:20:51.333694 kernel: SCSI subsystem initialized Nov 1 00:20:51.338478 systemd-resolved[277]: Positive Trust Anchors: Nov 1 00:20:51.340867 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 00:20:51.340925 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 1 00:20:51.365863 kernel: Loading iSCSI transport class v2.0-870. Nov 1 00:20:51.348299 systemd-resolved[277]: Defaulting to hostname 'linux'. Nov 1 00:20:51.368691 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 1 00:20:51.374890 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 1 00:20:51.387694 kernel: iscsi: registered transport (tcp) Nov 1 00:20:51.408536 kernel: iscsi: registered transport (qla4xxx) Nov 1 00:20:51.408614 kernel: QLogic iSCSI HBA Driver Nov 1 00:20:51.444343 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 1 00:20:51.455845 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 1 00:20:51.488206 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Nov 1 00:20:51.488290 kernel: device-mapper: uevent: version 1.0.3 Nov 1 00:20:51.491799 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 1 00:20:51.532699 kernel: raid6: avx512x4 gen() 18427 MB/s Nov 1 00:20:51.551691 kernel: raid6: avx512x2 gen() 18372 MB/s Nov 1 00:20:51.570681 kernel: raid6: avx512x1 gen() 18399 MB/s Nov 1 00:20:51.589681 kernel: raid6: avx2x4 gen() 18238 MB/s Nov 1 00:20:51.608681 kernel: raid6: avx2x2 gen() 18270 MB/s Nov 1 00:20:51.628486 kernel: raid6: avx2x1 gen() 14163 MB/s Nov 1 00:20:51.628527 kernel: raid6: using algorithm avx512x4 gen() 18427 MB/s Nov 1 00:20:51.652317 kernel: raid6: .... xor() 8315 MB/s, rmw enabled Nov 1 00:20:51.652340 kernel: raid6: using avx512x2 recovery algorithm Nov 1 00:20:51.675694 kernel: xor: automatically using best checksumming function avx Nov 1 00:20:51.826702 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 1 00:20:51.836423 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 1 00:20:51.846845 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 1 00:20:51.860587 systemd-udevd[395]: Using default interface naming scheme 'v255'. Nov 1 00:20:51.865115 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 1 00:20:51.881277 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 1 00:20:51.893947 dracut-pre-trigger[397]: rd.md=0: removing MD RAID activation Nov 1 00:20:51.920932 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 1 00:20:51.932888 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 1 00:20:51.977251 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 1 00:20:52.003135 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Nov 1 00:20:52.026017 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 1 00:20:52.033371 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 1 00:20:52.040685 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 1 00:20:52.047616 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 1 00:20:52.059891 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 1 00:20:52.077694 kernel: cryptd: max_cpu_qlen set to 1000 Nov 1 00:20:52.080000 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 1 00:20:52.099146 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 1 00:20:52.099376 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 00:20:52.122871 kernel: AVX2 version of gcm_enc/dec engaged. Nov 1 00:20:52.122904 kernel: hv_vmbus: Vmbus version:5.2 Nov 1 00:20:52.103521 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 1 00:20:52.106537 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 00:20:52.106803 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:20:52.110139 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 00:20:52.125128 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 00:20:52.753691 kernel: hv_vmbus: registering driver hyperv_keyboard Nov 1 00:20:52.753714 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Nov 1 00:20:52.753729 kernel: AES CTR mode by8 optimization enabled Nov 1 00:20:52.753740 kernel: pps_core: LinuxPPS API ver. 1 registered Nov 1 00:20:52.753750 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Nov 1 00:20:52.753764 kernel: PTP clock support registered Nov 1 00:20:52.753774 kernel: hv_utils: Registering HyperV Utility Driver Nov 1 00:20:52.753788 kernel: hv_vmbus: registering driver hv_utils Nov 1 00:20:52.753801 kernel: hv_utils: Heartbeat IC version 3.0 Nov 1 00:20:52.753811 kernel: hv_utils: Shutdown IC version 3.2 Nov 1 00:20:52.753824 kernel: hv_utils: TimeSync IC version 4.0 Nov 1 00:20:52.753835 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 1 00:20:52.728157 systemd-resolved[277]: Clock change detected. Flushing caches. Nov 1 00:20:52.773777 kernel: hv_vmbus: registering driver hv_storvsc Nov 1 00:20:52.776607 kernel: scsi host1: storvsc_host_t Nov 1 00:20:52.780551 kernel: scsi host0: storvsc_host_t Nov 1 00:20:52.785567 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Nov 1 00:20:52.788552 kernel: hv_vmbus: registering driver hv_netvsc Nov 1 00:20:52.790739 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:20:52.796782 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Nov 1 00:20:52.808938 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 1 00:20:52.821556 kernel: hv_vmbus: registering driver hid_hyperv Nov 1 00:20:52.834010 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Nov 1 00:20:52.840996 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Nov 1 00:20:52.850550 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Nov 1 00:20:52.851053 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 1 00:20:52.854557 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Nov 1 00:20:52.857934 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Nov 1 00:20:52.874022 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Nov 1 00:20:52.874363 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Nov 1 00:20:52.876735 kernel: sd 0:0:0:0: [sda] Write Protect is off Nov 1 00:20:52.881035 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Nov 1 00:20:52.881333 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Nov 1 00:20:52.887558 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 00:20:52.891573 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Nov 1 00:20:52.995565 kernel: hv_netvsc 6045bde1-0a6d-6045-bde1-0a6d6045bde1 eth0: VF slot 1 added Nov 1 00:20:53.004426 kernel: hv_vmbus: registering driver hv_pci Nov 1 00:20:53.004517 kernel: hv_pci 506f2f26-c35d-4f20-b46e-a7891081c746: PCI VMBus probing: Using version 0x10004 Nov 1 00:20:53.253722 kernel: hv_pci 506f2f26-c35d-4f20-b46e-a7891081c746: PCI host bridge to bus c35d:00 Nov 1 00:20:53.254136 kernel: pci_bus c35d:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Nov 1 00:20:53.257574 kernel: pci_bus c35d:00: No busn resource found for root bus, will use [bus 00-ff] Nov 1 00:20:53.262631 kernel: pci c35d:00:02.0: [15b3:1016] type 00 class 0x020000 Nov 1 00:20:53.267619 kernel: pci c35d:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Nov 1 00:20:53.271878 kernel: pci c35d:00:02.0: enabling Extended Tags Nov 1 00:20:53.283563 kernel: pci c35d:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at c35d:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Nov 1 00:20:53.289427 kernel: pci_bus c35d:00: busn_res: [bus 00-ff] end is updated to 00 Nov 1 00:20:53.289747 kernel: pci c35d:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Nov 1 00:20:53.464202 kernel: mlx5_core c35d:00:02.0: enabling device (0000 -> 0002) Nov 1 00:20:53.468555 kernel: mlx5_core c35d:00:02.0: firmware version: 14.30.5000 Nov 1 00:20:53.683218 kernel: hv_netvsc 
6045bde1-0a6d-6045-bde1-0a6d6045bde1 eth0: VF registering: eth1 Nov 1 00:20:53.683605 kernel: mlx5_core c35d:00:02.0 eth1: joined to eth0 Nov 1 00:20:53.688640 kernel: mlx5_core c35d:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Nov 1 00:20:53.697595 kernel: mlx5_core c35d:00:02.0 enP50013s1: renamed from eth1 Nov 1 00:20:53.697797 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (439) Nov 1 00:20:53.737445 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Nov 1 00:20:53.746177 kernel: BTRFS: device fsid 5d5360dd-ce7d-46d0-bc66-772f2084023b devid 1 transid 34 /dev/sda3 scanned by (udev-worker) (456) Nov 1 00:20:53.747991 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Nov 1 00:20:53.763952 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Nov 1 00:20:53.774012 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Nov 1 00:20:53.774134 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Nov 1 00:20:53.792727 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 1 00:20:53.809558 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 00:20:53.817556 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 00:20:53.826549 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 00:20:54.828625 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 1 00:20:54.828692 disk-uuid[599]: The operation has completed successfully. Nov 1 00:20:54.917332 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 1 00:20:54.917460 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 1 00:20:54.947690 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Nov 1 00:20:54.954084 sh[712]: Success Nov 1 00:20:54.982583 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Nov 1 00:20:55.351436 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 1 00:20:55.365773 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 1 00:20:55.370915 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 1 00:20:55.412220 kernel: BTRFS info (device dm-0): first mount of filesystem 5d5360dd-ce7d-46d0-bc66-772f2084023b Nov 1 00:20:55.412307 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:20:55.416106 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 1 00:20:55.418886 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 1 00:20:55.421361 kernel: BTRFS info (device dm-0): using free space tree Nov 1 00:20:55.809689 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 1 00:20:55.815434 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 1 00:20:55.824699 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 1 00:20:55.835687 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 1 00:20:55.855139 kernel: BTRFS info (device sda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 00:20:55.855225 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:20:55.857378 kernel: BTRFS info (device sda6): using free space tree Nov 1 00:20:55.915393 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 1 00:20:55.925542 kernel: BTRFS info (device sda6): auto enabling async discard Nov 1 00:20:55.931735 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Nov 1 00:20:55.941552 kernel: BTRFS info (device sda6): last unmount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 00:20:55.951569 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 1 00:20:55.964782 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 1 00:20:55.970960 systemd-networkd[888]: lo: Link UP Nov 1 00:20:55.970964 systemd-networkd[888]: lo: Gained carrier Nov 1 00:20:55.973730 systemd-networkd[888]: Enumeration completed Nov 1 00:20:55.973808 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 1 00:20:55.976128 systemd-networkd[888]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 1 00:20:55.976132 systemd-networkd[888]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 00:20:55.977744 systemd[1]: Reached target network.target - Network. Nov 1 00:20:56.037561 kernel: mlx5_core c35d:00:02.0 enP50013s1: Link up Nov 1 00:20:56.067569 kernel: hv_netvsc 6045bde1-0a6d-6045-bde1-0a6d6045bde1 eth0: Data path switched to VF: enP50013s1 Nov 1 00:20:56.068574 systemd-networkd[888]: enP50013s1: Link UP Nov 1 00:20:56.068758 systemd-networkd[888]: eth0: Link UP Nov 1 00:20:56.068988 systemd-networkd[888]: eth0: Gained carrier Nov 1 00:20:56.069005 systemd-networkd[888]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 1 00:20:56.083423 systemd-networkd[888]: enP50013s1: Gained carrier Nov 1 00:20:56.113607 systemd-networkd[888]: eth0: DHCPv4 address 10.200.8.40/24, gateway 10.200.8.1 acquired from 168.63.129.16 Nov 1 00:20:56.864103 ignition[896]: Ignition 2.19.0 Nov 1 00:20:56.864114 ignition[896]: Stage: fetch-offline Nov 1 00:20:56.864160 ignition[896]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:20:56.868456 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Nov 1 00:20:56.864171 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 1 00:20:56.864277 ignition[896]: parsed url from cmdline: "" Nov 1 00:20:56.864282 ignition[896]: no config URL provided Nov 1 00:20:56.864288 ignition[896]: reading system config file "/usr/lib/ignition/user.ign" Nov 1 00:20:56.864298 ignition[896]: no config at "/usr/lib/ignition/user.ign" Nov 1 00:20:56.864305 ignition[896]: failed to fetch config: resource requires networking Nov 1 00:20:56.866347 ignition[896]: Ignition finished successfully Nov 1 00:20:56.886719 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Nov 1 00:20:56.909055 ignition[904]: Ignition 2.19.0 Nov 1 00:20:56.909067 ignition[904]: Stage: fetch Nov 1 00:20:56.909282 ignition[904]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:20:56.909295 ignition[904]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 1 00:20:56.909396 ignition[904]: parsed url from cmdline: "" Nov 1 00:20:56.909399 ignition[904]: no config URL provided Nov 1 00:20:56.909404 ignition[904]: reading system config file "/usr/lib/ignition/user.ign" Nov 1 00:20:56.909411 ignition[904]: no config at "/usr/lib/ignition/user.ign" Nov 1 00:20:56.909437 ignition[904]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Nov 1 00:20:56.999371 ignition[904]: GET result: OK Nov 1 00:20:56.999487 ignition[904]: config has been read from IMDS userdata Nov 1 00:20:56.999529 ignition[904]: parsing config with SHA512: d572d0fa8b1f7b96b9d754252b4676e9deffe761078187e03c9706b5b1b59cf65a3188510f2f859d2cc31177e57d1c6709b9ba523de76cbf9d1b516e7f8e3e3f Nov 1 00:20:57.006251 unknown[904]: fetched base config from "system" Nov 1 00:20:57.006744 ignition[904]: fetch: fetch complete Nov 1 00:20:57.006268 unknown[904]: fetched base config from "system" Nov 1 00:20:57.006750 ignition[904]: fetch: fetch passed Nov 1 00:20:57.006278 unknown[904]: fetched user config from 
"azure" Nov 1 00:20:57.006799 ignition[904]: Ignition finished successfully Nov 1 00:20:57.012507 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 1 00:20:57.029704 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 1 00:20:57.050893 ignition[910]: Ignition 2.19.0 Nov 1 00:20:57.050904 ignition[910]: Stage: kargs Nov 1 00:20:57.053489 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 1 00:20:57.051140 ignition[910]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:20:57.051154 ignition[910]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 1 00:20:57.052022 ignition[910]: kargs: kargs passed Nov 1 00:20:57.052077 ignition[910]: Ignition finished successfully Nov 1 00:20:57.080709 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 1 00:20:57.097993 ignition[916]: Ignition 2.19.0 Nov 1 00:20:57.098004 ignition[916]: Stage: disks Nov 1 00:20:57.101229 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 1 00:20:57.098224 ignition[916]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:20:57.105803 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 1 00:20:57.098240 ignition[916]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 1 00:20:57.108289 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 1 00:20:57.099486 ignition[916]: disks: disks passed Nov 1 00:20:57.108707 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 1 00:20:57.099551 ignition[916]: Ignition finished successfully Nov 1 00:20:57.109309 systemd[1]: Reached target sysinit.target - System Initialization. Nov 1 00:20:57.112351 systemd[1]: Reached target basic.target - Basic System. Nov 1 00:20:57.148828 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Nov 1 00:20:57.212591 systemd-fsck[924]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Nov 1 00:20:57.218014 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 1 00:20:57.228653 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 1 00:20:57.321586 kernel: EXT4-fs (sda9): mounted filesystem cb9d31b8-5e00-461c-b45e-c304d1f8091c r/w with ordered data mode. Quota mode: none. Nov 1 00:20:57.322146 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 1 00:20:57.325092 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 1 00:20:57.365645 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 1 00:20:57.382552 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (935) Nov 1 00:20:57.391607 kernel: BTRFS info (device sda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 00:20:57.391694 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:20:57.395145 kernel: BTRFS info (device sda6): using free space tree Nov 1 00:20:57.396714 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 1 00:20:57.403315 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Nov 1 00:20:57.412102 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 1 00:20:57.418063 kernel: BTRFS info (device sda6): auto enabling async discard Nov 1 00:20:57.412146 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 1 00:20:57.425742 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 1 00:20:57.428322 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 1 00:20:57.437724 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Nov 1 00:20:58.060840 systemd-networkd[888]: eth0: Gained IPv6LL Nov 1 00:20:58.145367 coreos-metadata[950]: Nov 01 00:20:58.145 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Nov 1 00:20:58.149627 coreos-metadata[950]: Nov 01 00:20:58.147 INFO Fetch successful Nov 1 00:20:58.149627 coreos-metadata[950]: Nov 01 00:20:58.147 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Nov 1 00:20:58.158610 coreos-metadata[950]: Nov 01 00:20:58.158 INFO Fetch successful Nov 1 00:20:58.161211 coreos-metadata[950]: Nov 01 00:20:58.161 INFO wrote hostname ci-4081.3.6-n-534d15dd10 to /sysroot/etc/hostname Nov 1 00:20:58.162905 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 1 00:20:58.429131 initrd-setup-root[965]: cut: /sysroot/etc/passwd: No such file or directory Nov 1 00:20:58.477914 initrd-setup-root[972]: cut: /sysroot/etc/group: No such file or directory Nov 1 00:20:58.497367 initrd-setup-root[979]: cut: /sysroot/etc/shadow: No such file or directory Nov 1 00:20:58.504495 initrd-setup-root[986]: cut: /sysroot/etc/gshadow: No such file or directory Nov 1 00:20:59.544865 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 1 00:20:59.555734 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 1 00:20:59.562819 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 1 00:20:59.569969 kernel: BTRFS info (device sda6): last unmount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 00:20:59.573807 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Nov 1 00:20:59.597714 ignition[1053]: INFO : Ignition 2.19.0 Nov 1 00:20:59.597714 ignition[1053]: INFO : Stage: mount Nov 1 00:20:59.605515 ignition[1053]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 00:20:59.605515 ignition[1053]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 1 00:20:59.605515 ignition[1053]: INFO : mount: mount passed Nov 1 00:20:59.605515 ignition[1053]: INFO : Ignition finished successfully Nov 1 00:20:59.601294 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 1 00:20:59.620711 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 1 00:20:59.630103 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 1 00:20:59.655563 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1061) Nov 1 00:20:59.659729 kernel: BTRFS info (device sda6): first mount of filesystem 92f9034d-7d56-482a-b71a-15e476525571 Nov 1 00:20:59.659808 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 1 00:20:59.668303 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:20:59.668322 kernel: BTRFS info (device sda6): using free space tree Nov 1 00:20:59.677548 kernel: BTRFS info (device sda6): auto enabling async discard Nov 1 00:20:59.679353 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 1 00:20:59.703433 ignition[1082]: INFO : Ignition 2.19.0 Nov 1 00:20:59.703433 ignition[1082]: INFO : Stage: files Nov 1 00:20:59.708069 ignition[1082]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 00:20:59.708069 ignition[1082]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 1 00:20:59.708069 ignition[1082]: DEBUG : files: compiled without relabeling support, skipping Nov 1 00:20:59.747079 ignition[1082]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 1 00:20:59.747079 ignition[1082]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 1 00:20:59.878646 ignition[1082]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 1 00:20:59.883109 ignition[1082]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 1 00:20:59.883109 ignition[1082]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 1 00:20:59.879146 unknown[1082]: wrote ssh authorized keys file for user: core Nov 1 00:20:59.909708 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 1 00:20:59.917449 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Nov 1 00:20:59.995176 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 1 00:21:00.058449 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Nov 1 00:21:00.063899 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 1 00:21:00.068601 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 1 
00:21:00.068601 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 1 00:21:00.078104 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 1 00:21:00.082903 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 00:21:00.087766 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 1 00:21:00.092605 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 00:21:00.097275 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 1 00:21:00.115762 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 00:21:00.121076 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 1 00:21:00.121076 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 1 00:21:00.121076 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 1 00:21:00.121076 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 1 00:21:00.121076 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Nov 1 00:21:00.411619 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 1 00:21:00.731868 ignition[1082]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Nov 1 00:21:00.731868 ignition[1082]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 1 00:21:00.758933 ignition[1082]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 00:21:00.764479 ignition[1082]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 1 00:21:00.764479 ignition[1082]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 1 00:21:00.764479 ignition[1082]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Nov 1 00:21:00.764479 ignition[1082]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Nov 1 00:21:00.764479 ignition[1082]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 1 00:21:00.764479 ignition[1082]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 1 00:21:00.764479 ignition[1082]: INFO : files: files passed Nov 1 00:21:00.764479 ignition[1082]: INFO : Ignition finished successfully Nov 1 00:21:00.761479 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 1 00:21:00.802383 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 1 00:21:00.806736 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Nov 1 00:21:00.812028 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 1 00:21:00.812571 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 1 00:21:00.836841 initrd-setup-root-after-ignition[1110]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:21:00.836841 initrd-setup-root-after-ignition[1110]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:21:00.845364 initrd-setup-root-after-ignition[1114]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 1 00:21:00.842480 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 1 00:21:00.847331 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 1 00:21:00.866754 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 1 00:21:00.890383 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 1 00:21:00.893019 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 1 00:21:00.900008 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 1 00:21:00.902990 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 1 00:21:00.907234 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 1 00:21:00.917791 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 1 00:21:00.932391 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 1 00:21:00.943784 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 1 00:21:00.956061 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 1 00:21:00.962205 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. 
Nov 1 00:21:00.965522 systemd[1]: Stopped target timers.target - Timer Units. Nov 1 00:21:00.973573 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 1 00:21:00.973721 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 1 00:21:00.979818 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 1 00:21:00.988420 systemd[1]: Stopped target basic.target - Basic System. Nov 1 00:21:00.990992 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 1 00:21:00.996031 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 1 00:21:01.002091 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 1 00:21:01.008209 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 1 00:21:01.014088 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 1 00:21:01.023324 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 1 00:21:01.029311 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 1 00:21:01.034850 systemd[1]: Stopped target swap.target - Swaps. Nov 1 00:21:01.039524 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 1 00:21:01.039734 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 1 00:21:01.044813 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 1 00:21:01.050232 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 1 00:21:01.056018 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 1 00:21:01.058808 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 1 00:21:01.062384 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 1 00:21:01.062520 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Nov 1 00:21:01.079872 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 1 00:21:01.080087 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 1 00:21:01.089899 systemd[1]: ignition-files.service: Deactivated successfully. Nov 1 00:21:01.090079 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 1 00:21:01.095270 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 1 00:21:01.095375 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 1 00:21:01.117733 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 1 00:21:01.123007 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 1 00:21:01.123167 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 1 00:21:01.135438 ignition[1134]: INFO : Ignition 2.19.0 Nov 1 00:21:01.135438 ignition[1134]: INFO : Stage: umount Nov 1 00:21:01.135438 ignition[1134]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 00:21:01.135438 ignition[1134]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 1 00:21:01.135438 ignition[1134]: INFO : umount: umount passed Nov 1 00:21:01.135438 ignition[1134]: INFO : Ignition finished successfully Nov 1 00:21:01.149951 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 1 00:21:01.154738 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 1 00:21:01.155054 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 1 00:21:01.163837 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 1 00:21:01.164041 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 1 00:21:01.174383 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 1 00:21:01.177038 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Nov 1 00:21:01.183223 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 1 00:21:01.183519 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 1 00:21:01.188956 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 1 00:21:01.189007 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 1 00:21:01.194443 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 1 00:21:01.194495 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 1 00:21:01.199772 systemd[1]: Stopped target network.target - Network. Nov 1 00:21:01.202069 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 1 00:21:01.202114 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 1 00:21:01.205410 systemd[1]: Stopped target paths.target - Path Units. Nov 1 00:21:01.208127 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 1 00:21:01.212619 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 1 00:21:01.215654 systemd[1]: Stopped target slices.target - Slice Units. Nov 1 00:21:01.216548 systemd[1]: Stopped target sockets.target - Socket Units. Nov 1 00:21:01.217023 systemd[1]: iscsid.socket: Deactivated successfully. Nov 1 00:21:01.217066 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 1 00:21:01.217469 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 1 00:21:01.217502 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 1 00:21:01.218367 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 1 00:21:01.218414 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 1 00:21:01.218832 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 1 00:21:01.218867 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Nov 1 00:21:01.219417 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 1 00:21:01.219764 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 1 00:21:01.220875 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 1 00:21:01.221415 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 1 00:21:01.221514 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 1 00:21:01.264162 systemd-networkd[888]: eth0: DHCPv6 lease lost Nov 1 00:21:01.265896 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 1 00:21:01.266004 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 1 00:21:01.282477 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 1 00:21:01.282616 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 1 00:21:01.290954 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 1 00:21:01.291002 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 1 00:21:01.312732 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 1 00:21:01.319766 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 1 00:21:01.319841 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 1 00:21:01.326181 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 1 00:21:01.326243 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 1 00:21:01.331614 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 1 00:21:01.331677 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 1 00:21:01.334901 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 1 00:21:01.334958 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Nov 1 00:21:01.338505 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 1 00:21:01.373145 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 1 00:21:01.373289 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 1 00:21:01.383471 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 1 00:21:01.383970 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 1 00:21:01.388836 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 1 00:21:01.388879 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 1 00:21:01.391654 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 1 00:21:01.391698 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 1 00:21:01.394685 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 1 00:21:01.430570 kernel: hv_netvsc 6045bde1-0a6d-6045-bde1-0a6d6045bde1 eth0: Data path switched from VF: enP50013s1 Nov 1 00:21:01.394731 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 1 00:21:01.400213 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 1 00:21:01.400269 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 00:21:01.430164 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 1 00:21:01.436458 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 1 00:21:01.436550 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 1 00:21:01.442653 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 00:21:01.442761 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:21:01.466209 systemd[1]: network-cleanup.service: Deactivated successfully. 
Nov 1 00:21:01.466353 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 1 00:21:01.471949 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 1 00:21:01.472032 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 1 00:21:01.733285 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 1 00:21:01.733444 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 1 00:21:01.741473 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 1 00:21:01.744492 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 1 00:21:01.744565 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 1 00:21:01.762691 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 1 00:21:02.201270 systemd[1]: Switching root. Nov 1 00:21:02.295570 systemd-journald[176]: Journal stopped Nov 1 00:21:10.246466 systemd-journald[176]: Received SIGTERM from PID 1 (systemd). Nov 1 00:21:10.246498 kernel: SELinux: policy capability network_peer_controls=1 Nov 1 00:21:10.246511 kernel: SELinux: policy capability open_perms=1 Nov 1 00:21:10.246524 kernel: SELinux: policy capability extended_socket_class=1 Nov 1 00:21:10.246541 kernel: SELinux: policy capability always_check_network=0 Nov 1 00:21:10.246552 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 1 00:21:10.246566 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 1 00:21:10.246584 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 1 00:21:10.246593 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 1 00:21:10.246605 kernel: audit: type=1403 audit(1761956463.807:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 1 00:21:10.246617 systemd[1]: Successfully loaded SELinux policy in 246.119ms. Nov 1 00:21:10.246630 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.966ms. 
Nov 1 00:21:10.246644 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 1 00:21:10.246656 systemd[1]: Detected virtualization microsoft. Nov 1 00:21:10.246671 systemd[1]: Detected architecture x86-64. Nov 1 00:21:10.246682 systemd[1]: Detected first boot. Nov 1 00:21:10.246696 systemd[1]: Hostname set to . Nov 1 00:21:10.246707 systemd[1]: Initializing machine ID from random generator. Nov 1 00:21:10.246719 zram_generator::config[1178]: No configuration found. Nov 1 00:21:10.246735 systemd[1]: Populated /etc with preset unit settings. Nov 1 00:21:10.246748 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 1 00:21:10.246760 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 1 00:21:10.246774 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 1 00:21:10.246786 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 1 00:21:10.246801 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 1 00:21:10.246816 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 1 00:21:10.246834 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 1 00:21:10.246849 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 1 00:21:10.246864 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 1 00:21:10.246879 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 1 00:21:10.246894 systemd[1]: Created slice user.slice - User and Session Slice. 
Nov 1 00:21:10.246910 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 1 00:21:10.246927 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 1 00:21:10.246943 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 1 00:21:10.246964 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 1 00:21:10.246981 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 1 00:21:10.246996 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 1 00:21:10.247012 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 1 00:21:10.247025 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 1 00:21:10.247038 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 1 00:21:10.248108 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 1 00:21:10.248128 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 1 00:21:10.248145 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 1 00:21:10.248156 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 1 00:21:10.248169 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 1 00:21:10.248180 systemd[1]: Reached target slices.target - Slice Units. Nov 1 00:21:10.248193 systemd[1]: Reached target swap.target - Swaps. Nov 1 00:21:10.248204 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 1 00:21:10.248217 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 1 00:21:10.248230 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Nov 1 00:21:10.248244 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 1 00:21:10.248255 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 1 00:21:10.248268 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 1 00:21:10.248279 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 1 00:21:10.248295 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 1 00:21:10.248306 systemd[1]: Mounting media.mount - External Media Directory... Nov 1 00:21:10.248319 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:21:10.248330 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 1 00:21:10.248345 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 1 00:21:10.248356 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 1 00:21:10.248370 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 1 00:21:10.248383 systemd[1]: Reached target machines.target - Containers. Nov 1 00:21:10.248397 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 1 00:21:10.248410 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 1 00:21:10.248421 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 1 00:21:10.248434 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 1 00:21:10.248445 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 1 00:21:10.248458 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Nov 1 00:21:10.248469 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 1 00:21:10.248483 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 1 00:21:10.248494 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 1 00:21:10.248510 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 1 00:21:10.248520 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 1 00:21:10.248543 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 1 00:21:10.248557 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 1 00:21:10.248568 systemd[1]: Stopped systemd-fsck-usr.service. Nov 1 00:21:10.248582 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 1 00:21:10.248592 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 1 00:21:10.248606 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 1 00:21:10.248622 kernel: fuse: init (API version 7.39) Nov 1 00:21:10.248634 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 1 00:21:10.248644 kernel: loop: module loaded Nov 1 00:21:10.248657 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 1 00:21:10.248668 systemd[1]: verity-setup.service: Deactivated successfully. Nov 1 00:21:10.248681 systemd[1]: Stopped verity-setup.service. Nov 1 00:21:10.248692 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:21:10.248706 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 1 00:21:10.248742 systemd-journald[1263]: Collecting audit messages is disabled. 
Nov 1 00:21:10.248769 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 1 00:21:10.248782 systemd-journald[1263]: Journal started Nov 1 00:21:10.248808 systemd-journald[1263]: Runtime Journal (/run/log/journal/6eafe67f8c6f4b5a82a0cfe3ebd34011) is 8.0M, max 158.8M, 150.8M free. Nov 1 00:21:09.462170 systemd[1]: Queued start job for default target multi-user.target. Nov 1 00:21:09.605963 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Nov 1 00:21:09.606342 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 1 00:21:10.257553 systemd[1]: Started systemd-journald.service - Journal Service. Nov 1 00:21:10.260335 systemd[1]: Mounted media.mount - External Media Directory. Nov 1 00:21:10.263354 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 1 00:21:10.266485 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 1 00:21:10.269926 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 1 00:21:10.272906 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 1 00:21:10.276229 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 1 00:21:10.279964 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 1 00:21:10.280113 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 1 00:21:10.286504 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:21:10.286694 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 1 00:21:10.290291 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:21:10.290460 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 1 00:21:10.294017 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 1 00:21:10.294198 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
Nov 1 00:21:10.297759 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 1 00:21:10.298031 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 1 00:21:10.301708 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 1 00:21:10.305193 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 1 00:21:10.308973 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 1 00:21:10.335067 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 1 00:21:10.371739 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 1 00:21:10.379559 kernel: ACPI: bus type drm_connector registered
Nov 1 00:21:10.379766 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 1 00:21:10.383088 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 1 00:21:10.383135 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 1 00:21:10.391311 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Nov 1 00:21:10.405733 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 1 00:21:10.418733 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 1 00:21:10.421937 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 1 00:21:10.467719 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 1 00:21:10.472232 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 1 00:21:10.476099 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 1 00:21:10.477594 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 1 00:21:10.480803 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 1 00:21:10.482232 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 1 00:21:10.490520 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 1 00:21:10.500697 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 1 00:21:10.505840 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 1 00:21:10.506078 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 1 00:21:10.509638 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 1 00:21:10.513789 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 1 00:21:10.517873 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 1 00:21:10.523322 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 1 00:21:10.538735 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Nov 1 00:21:10.544608 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 1 00:21:10.551907 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 1 00:21:10.552324 systemd-journald[1263]: Time spent on flushing to /var/log/journal/6eafe67f8c6f4b5a82a0cfe3ebd34011 is 20.162ms for 956 entries.
Nov 1 00:21:10.552324 systemd-journald[1263]: System Journal (/var/log/journal/6eafe67f8c6f4b5a82a0cfe3ebd34011) is 8.0M, max 2.6G, 2.6G free.
Nov 1 00:21:10.683423 systemd-journald[1263]: Received client request to flush runtime journal.
Nov 1 00:21:10.683490 kernel: loop0: detected capacity change from 0 to 142488
Nov 1 00:21:10.562856 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Nov 1 00:21:10.569437 udevadm[1322]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Nov 1 00:21:10.685781 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 1 00:21:10.707842 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 1 00:21:10.708709 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Nov 1 00:21:10.799525 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 1 00:21:11.232940 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 1 00:21:11.245715 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 1 00:21:11.293563 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 1 00:21:11.378138 kernel: loop1: detected capacity change from 0 to 219144
Nov 1 00:21:11.448026 kernel: loop2: detected capacity change from 0 to 140768
Nov 1 00:21:11.463338 systemd-tmpfiles[1331]: ACLs are not supported, ignoring.
Nov 1 00:21:11.463788 systemd-tmpfiles[1331]: ACLs are not supported, ignoring.
Nov 1 00:21:11.469050 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 1 00:21:11.742089 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 1 00:21:11.751750 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 1 00:21:11.773546 systemd-udevd[1338]: Using default interface naming scheme 'v255'.
Nov 1 00:21:12.143560 kernel: loop3: detected capacity change from 0 to 31056
Nov 1 00:21:12.441598 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 1 00:21:12.455115 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 1 00:21:12.527018 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 1 00:21:12.563747 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 1 00:21:12.646748 kernel: mousedev: PS/2 mouse device common for all mice
Nov 1 00:21:12.643985 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 1 00:21:12.679665 kernel: loop4: detected capacity change from 0 to 142488
Nov 1 00:21:12.702629 kernel: hv_vmbus: registering driver hv_balloon
Nov 1 00:21:12.716663 kernel: loop5: detected capacity change from 0 to 219144
Nov 1 00:21:12.722567 kernel: hv_vmbus: registering driver hyperv_fb
Nov 1 00:21:12.727579 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Nov 1 00:21:12.732561 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Nov 1 00:21:12.738028 kernel: Console: switching to colour dummy device 80x25
Nov 1 00:21:12.743419 kernel: Console: switching to colour frame buffer device 128x48
Nov 1 00:21:12.748399 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Nov 1 00:21:12.756556 kernel: loop6: detected capacity change from 0 to 140768
Nov 1 00:21:12.762910 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 00:21:12.879356 kernel: loop7: detected capacity change from 0 to 31056
Nov 1 00:21:12.875272 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 1 00:21:12.875509 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 00:21:12.896721 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 00:21:12.902303 (sd-merge)[1383]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Nov 1 00:21:12.903935 (sd-merge)[1383]: Merged extensions into '/usr'.
Nov 1 00:21:12.947552 systemd-networkd[1344]: lo: Link UP
Nov 1 00:21:12.947563 systemd-networkd[1344]: lo: Gained carrier
Nov 1 00:21:12.950848 systemd-networkd[1344]: Enumeration completed
Nov 1 00:21:12.951272 systemd-networkd[1344]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 1 00:21:12.951277 systemd-networkd[1344]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 1 00:21:12.952295 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 1 00:21:12.967513 systemd[1]: Reloading requested from client PID 1313 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 1 00:21:12.967541 systemd[1]: Reloading...
Nov 1 00:21:13.041789 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1341)
Nov 1 00:21:13.074696 kernel: mlx5_core c35d:00:02.0 enP50013s1: Link up
Nov 1 00:21:13.100553 kernel: hv_netvsc 6045bde1-0a6d-6045-bde1-0a6d6045bde1 eth0: Data path switched to VF: enP50013s1
Nov 1 00:21:13.107818 systemd-networkd[1344]: enP50013s1: Link UP
Nov 1 00:21:13.108138 systemd-networkd[1344]: eth0: Link UP
Nov 1 00:21:13.108424 systemd-networkd[1344]: eth0: Gained carrier
Nov 1 00:21:13.108646 systemd-networkd[1344]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 1 00:21:13.118790 systemd-networkd[1344]: enP50013s1: Gained carrier
Nov 1 00:21:13.123363 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Nov 1 00:21:13.123500 zram_generator::config[1448]: No configuration found.
Nov 1 00:21:13.152494 systemd-networkd[1344]: eth0: DHCPv4 address 10.200.8.40/24, gateway 10.200.8.1 acquired from 168.63.129.16
Nov 1 00:21:13.339394 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 1 00:21:13.419482 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Nov 1 00:21:13.423788 systemd[1]: Reloading finished in 455 ms.
Nov 1 00:21:13.457379 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 1 00:21:13.479698 systemd[1]: Starting ensure-sysext.service...
Nov 1 00:21:13.484591 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 1 00:21:13.487683 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 1 00:21:13.493697 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 1 00:21:13.497944 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Nov 1 00:21:13.507276 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Nov 1 00:21:13.511021 systemd[1]: Reloading requested from client PID 1515 ('systemctl') (unit ensure-sysext.service)...
Nov 1 00:21:13.511038 systemd[1]: Reloading...
Nov 1 00:21:13.572628 systemd-tmpfiles[1518]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 1 00:21:13.573159 systemd-tmpfiles[1518]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 1 00:21:13.575500 systemd-tmpfiles[1518]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 1 00:21:13.575991 systemd-tmpfiles[1518]: ACLs are not supported, ignoring.
Nov 1 00:21:13.576086 systemd-tmpfiles[1518]: ACLs are not supported, ignoring.
Nov 1 00:21:13.589630 zram_generator::config[1549]: No configuration found.
Nov 1 00:21:13.663348 systemd-tmpfiles[1518]: Detected autofs mount point /boot during canonicalization of boot.
Nov 1 00:21:13.663364 systemd-tmpfiles[1518]: Skipping /boot
Nov 1 00:21:13.675715 systemd-tmpfiles[1518]: Detected autofs mount point /boot during canonicalization of boot.
Nov 1 00:21:13.675728 systemd-tmpfiles[1518]: Skipping /boot
Nov 1 00:21:13.710582 lvm[1521]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 1 00:21:13.756505 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 1 00:21:13.834310 systemd[1]: Reloading finished in 322 ms.
Nov 1 00:21:13.853918 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 1 00:21:13.854138 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 00:21:13.865152 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 1 00:21:13.869151 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 1 00:21:13.872831 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Nov 1 00:21:13.883173 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 1 00:21:13.893855 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Nov 1 00:21:13.922796 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 1 00:21:13.928191 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Nov 1 00:21:13.936642 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 1 00:21:13.939139 lvm[1620]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 1 00:21:13.943195 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 1 00:21:13.951830 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 1 00:21:13.960840 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 1 00:21:13.974158 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:21:13.974431 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 1 00:21:13.981781 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 1 00:21:13.991445 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 1 00:21:13.999184 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 1 00:21:14.002315 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 1 00:21:14.002486 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:21:14.006276 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 1 00:21:14.006482 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 1 00:21:14.015600 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Nov 1 00:21:14.034014 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 1 00:21:14.034212 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 1 00:21:14.038340 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 1 00:21:14.038524 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 1 00:21:14.053723 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 1 00:21:14.061332 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:21:14.061665 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 1 00:21:14.066806 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 1 00:21:14.071023 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 1 00:21:14.076257 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 1 00:21:14.081644 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 1 00:21:14.081841 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 1 00:21:14.082011 systemd[1]: Reached target time-set.target - System Time Set.
Nov 1 00:21:14.082395 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:21:14.083454 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 1 00:21:14.083593 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 1 00:21:14.084067 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 1 00:21:14.084187 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 1 00:21:14.092048 systemd[1]: Finished ensure-sysext.service.
Nov 1 00:21:14.094179 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 1 00:21:14.094787 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 1 00:21:14.096568 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 1 00:21:14.100974 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 1 00:21:14.101097 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 1 00:21:14.101850 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 1 00:21:14.165525 systemd-resolved[1622]: Positive Trust Anchors:
Nov 1 00:21:14.165557 systemd-resolved[1622]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 1 00:21:14.165602 systemd-resolved[1622]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 1 00:21:14.197488 systemd-resolved[1622]: Using system hostname 'ci-4081.3.6-n-534d15dd10'.
Nov 1 00:21:14.199256 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 1 00:21:14.203020 systemd[1]: Reached target network.target - Network.
Nov 1 00:21:14.205633 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 1 00:21:14.264317 augenrules[1658]: No rules
Nov 1 00:21:14.266017 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Nov 1 00:21:14.390764 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 1 00:21:14.572656 systemd-networkd[1344]: eth0: Gained IPv6LL
Nov 1 00:21:14.575577 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Nov 1 00:21:14.579444 systemd[1]: Reached target network-online.target - Network is Online.
Nov 1 00:21:14.760202 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 00:21:16.828524 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 1 00:21:16.833841 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 1 00:21:20.488648 ldconfig[1308]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 1 00:21:20.501904 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 1 00:21:20.510778 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 1 00:21:20.538295 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 1 00:21:20.542041 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 1 00:21:20.545019 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 1 00:21:20.548594 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 1 00:21:20.552402 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 1 00:21:20.555389 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 1 00:21:20.561589 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 1 00:21:20.564843 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 1 00:21:20.564883 systemd[1]: Reached target paths.target - Path Units.
Nov 1 00:21:20.567245 systemd[1]: Reached target timers.target - Timer Units.
Nov 1 00:21:20.570406 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 1 00:21:20.574992 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 1 00:21:20.585459 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 1 00:21:20.588944 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 1 00:21:20.591871 systemd[1]: Reached target sockets.target - Socket Units.
Nov 1 00:21:20.594568 systemd[1]: Reached target basic.target - Basic System.
Nov 1 00:21:20.597096 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 1 00:21:20.597134 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 1 00:21:20.604651 systemd[1]: Starting chronyd.service - NTP client/server...
Nov 1 00:21:20.610667 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 1 00:21:20.618804 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Nov 1 00:21:20.623856 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 1 00:21:20.633233 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 1 00:21:20.642335 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 1 00:21:20.646222 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 1 00:21:20.646281 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Nov 1 00:21:20.647642 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Nov 1 00:21:20.652976 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Nov 1 00:21:20.660634 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 00:21:20.664921 jq[1679]: false
Nov 1 00:21:20.670124 (chronyd)[1675]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Nov 1 00:21:20.672105 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 1 00:21:20.679725 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Nov 1 00:21:20.685034 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 1 00:21:20.692665 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 1 00:21:20.700335 KVP[1683]: KVP starting; pid is:1683
Nov 1 00:21:20.702201 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 1 00:21:20.715772 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 1 00:21:20.715956 chronyd[1694]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Nov 1 00:21:20.719231 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 1 00:21:20.719707 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 1 00:21:20.721167 kernel: hv_utils: KVP IC version 4.0
Nov 1 00:21:20.721039 KVP[1683]: KVP LIC Version: 3.1
Nov 1 00:21:20.726759 systemd[1]: Starting update-engine.service - Update Engine...
Nov 1 00:21:20.735715 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 1 00:21:20.747606 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 1 00:21:20.748142 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 1 00:21:20.751966 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 1 00:21:20.753591 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 1 00:21:20.756349 extend-filesystems[1682]: Found loop4
Nov 1 00:21:20.762867 extend-filesystems[1682]: Found loop5
Nov 1 00:21:20.762867 extend-filesystems[1682]: Found loop6
Nov 1 00:21:20.762867 extend-filesystems[1682]: Found loop7
Nov 1 00:21:20.762867 extend-filesystems[1682]: Found sda
Nov 1 00:21:20.762867 extend-filesystems[1682]: Found sda1
Nov 1 00:21:20.762867 extend-filesystems[1682]: Found sda2
Nov 1 00:21:20.762867 extend-filesystems[1682]: Found sda3
Nov 1 00:21:20.762867 extend-filesystems[1682]: Found usr
Nov 1 00:21:20.762867 extend-filesystems[1682]: Found sda4
Nov 1 00:21:20.762867 extend-filesystems[1682]: Found sda6
Nov 1 00:21:20.762867 extend-filesystems[1682]: Found sda7
Nov 1 00:21:20.762867 extend-filesystems[1682]: Found sda9
Nov 1 00:21:20.762867 extend-filesystems[1682]: Checking size of /dev/sda9
Nov 1 00:21:20.780439 chronyd[1694]: Timezone right/UTC failed leap second check, ignoring
Nov 1 00:21:20.780662 chronyd[1694]: Loaded seccomp filter (level 2)
Nov 1 00:21:20.792483 jq[1698]: true
Nov 1 00:21:20.799431 update_engine[1695]: I20251101 00:21:20.799335 1695 main.cc:92] Flatcar Update Engine starting
Nov 1 00:21:20.805936 systemd[1]: Started chronyd.service - NTP client/server.
Nov 1 00:21:20.809216 systemd[1]: motdgen.service: Deactivated successfully.
Nov 1 00:21:20.809452 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 1 00:21:20.826236 (ntainerd)[1713]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 1 00:21:20.831867 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Nov 1 00:21:20.835597 jq[1712]: true
Nov 1 00:21:21.729634 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 00:21:21.734804 (kubelet)[1746]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 1 00:21:22.248478 kubelet[1746]: E1101 00:21:22.248385 1746 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 1 00:21:22.250747 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 1 00:21:22.250966 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 1 00:21:22.475674 tar[1704]: linux-amd64/LICENSE
Nov 1 00:21:22.476045 tar[1704]: linux-amd64/helm
Nov 1 00:21:22.527700 bash[1753]: Updated "/home/core/.ssh/authorized_keys"
Nov 1 00:21:22.530741 extend-filesystems[1682]: Old size kept for /dev/sda9
Nov 1 00:21:22.530741 extend-filesystems[1682]: Found sr0
Nov 1 00:21:22.528920 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 1 00:21:22.533776 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 1 00:21:22.533993 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 1 00:21:22.552440 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Nov 1 00:21:22.590562 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (1767)
Nov 1 00:21:23.026693 systemd-logind[1692]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Nov 1 00:21:23.029743 systemd-logind[1692]: New seat seat0.
Nov 1 00:21:23.030737 tar[1704]: linux-amd64/README.md
Nov 1 00:21:23.030552 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 1 00:21:23.065791 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Nov 1 00:21:23.154594 dbus-daemon[1678]: [system] SELinux support is enabled
Nov 1 00:21:23.154835 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 1 00:21:23.163478 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 1 00:21:23.163523 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 1 00:21:23.168886 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 1 00:21:23.168914 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 1 00:21:23.174983 dbus-daemon[1678]: [system] Successfully activated service 'org.freedesktop.systemd1'
Nov 1 00:21:23.177068 update_engine[1695]: I20251101 00:21:23.177011 1695 update_check_scheduler.cc:74] Next update check in 2m53s
Nov 1 00:21:23.177449 systemd[1]: Started update-engine.service - Update Engine.
Nov 1 00:21:23.191856 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 1 00:21:23.236252 sshd_keygen[1710]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 1 00:21:23.248516 coreos-metadata[1677]: Nov 01 00:21:23.248 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Nov 1 00:21:23.252342 coreos-metadata[1677]: Nov 01 00:21:23.252 INFO Fetch successful
Nov 1 00:21:23.252620 coreos-metadata[1677]: Nov 01 00:21:23.252 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Nov 1 00:21:23.257325 coreos-metadata[1677]: Nov 01 00:21:23.257 INFO Fetch successful
Nov 1 00:21:23.257914 coreos-metadata[1677]: Nov 01 00:21:23.257 INFO Fetching http://168.63.129.16/machine/0e091a4e-4836-46d4-bb07-76f0b768a780/b0d33d00%2D0938%2D418d%2Db969%2Dcce2a512c7a0.%5Fci%2D4081.3.6%2Dn%2D534d15dd10?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Nov 1 00:21:23.259672 coreos-metadata[1677]: Nov 01 00:21:23.259 INFO Fetch successful
Nov 1 00:21:23.259914 coreos-metadata[1677]: Nov 01 00:21:23.259 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Nov 1 00:21:23.270364 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Nov 1 00:21:23.271834 coreos-metadata[1677]: Nov 01 00:21:23.271 INFO Fetch successful
Nov 1 00:21:23.284443 systemd[1]: Starting issuegen.service - Generate /run/issue...
Nov 1 00:21:23.289652 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Nov 1 00:21:23.304649 systemd[1]: issuegen.service: Deactivated successfully.
Nov 1 00:21:23.305203 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 1 00:21:23.310788 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Nov 1 00:21:23.320038 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Nov 1 00:21:23.327971 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 1 00:21:23.336720 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Nov 1 00:21:23.351735 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 1 00:21:23.367611 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 1 00:21:23.383505 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 1 00:21:23.387263 systemd[1]: Reached target getty.target - Login Prompts. Nov 1 00:21:23.405599 locksmithd[1800]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 1 00:21:23.628580 containerd[1713]: time="2025-11-01T00:21:23.628426400Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 1 00:21:23.653040 containerd[1713]: time="2025-11-01T00:21:23.652984700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:21:23.654616 containerd[1713]: time="2025-11-01T00:21:23.654571500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:21:23.654616 containerd[1713]: time="2025-11-01T00:21:23.654606700Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 1 00:21:23.654748 containerd[1713]: time="2025-11-01T00:21:23.654627200Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 1 00:21:23.654841 containerd[1713]: time="2025-11-01T00:21:23.654816100Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 1 00:21:23.654899 containerd[1713]: time="2025-11-01T00:21:23.654841800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Nov 1 00:21:23.654960 containerd[1713]: time="2025-11-01T00:21:23.654932100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:21:23.655004 containerd[1713]: time="2025-11-01T00:21:23.654957000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:21:23.655185 containerd[1713]: time="2025-11-01T00:21:23.655159500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:21:23.655185 containerd[1713]: time="2025-11-01T00:21:23.655180800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 1 00:21:23.655276 containerd[1713]: time="2025-11-01T00:21:23.655200300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:21:23.655276 containerd[1713]: time="2025-11-01T00:21:23.655224800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 1 00:21:23.655362 containerd[1713]: time="2025-11-01T00:21:23.655339000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:21:23.655622 containerd[1713]: time="2025-11-01T00:21:23.655598400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 1 00:21:23.655759 containerd[1713]: time="2025-11-01T00:21:23.655737400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 1 00:21:23.655815 containerd[1713]: time="2025-11-01T00:21:23.655756800Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 1 00:21:23.655877 containerd[1713]: time="2025-11-01T00:21:23.655855500Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 1 00:21:23.655937 containerd[1713]: time="2025-11-01T00:21:23.655916600Z" level=info msg="metadata content store policy set" policy=shared Nov 1 00:21:23.670127 containerd[1713]: time="2025-11-01T00:21:23.670086300Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 1 00:21:23.670248 containerd[1713]: time="2025-11-01T00:21:23.670149700Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 1 00:21:23.670248 containerd[1713]: time="2025-11-01T00:21:23.670171400Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 1 00:21:23.670248 containerd[1713]: time="2025-11-01T00:21:23.670190900Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 1 00:21:23.670248 containerd[1713]: time="2025-11-01T00:21:23.670209400Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 1 00:21:23.670411 containerd[1713]: time="2025-11-01T00:21:23.670386500Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 1 00:21:23.670717 containerd[1713]: time="2025-11-01T00:21:23.670689800Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Nov 1 00:21:23.670849 containerd[1713]: time="2025-11-01T00:21:23.670827000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 1 00:21:23.670904 containerd[1713]: time="2025-11-01T00:21:23.670851900Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 1 00:21:23.670904 containerd[1713]: time="2025-11-01T00:21:23.670870700Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 1 00:21:23.670904 containerd[1713]: time="2025-11-01T00:21:23.670891200Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 1 00:21:23.670999 containerd[1713]: time="2025-11-01T00:21:23.670910200Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 1 00:21:23.670999 containerd[1713]: time="2025-11-01T00:21:23.670943400Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 1 00:21:23.670999 containerd[1713]: time="2025-11-01T00:21:23.670966200Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 1 00:21:23.670999 containerd[1713]: time="2025-11-01T00:21:23.670987800Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 1 00:21:23.671135 containerd[1713]: time="2025-11-01T00:21:23.671006200Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 1 00:21:23.671135 containerd[1713]: time="2025-11-01T00:21:23.671023900Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Nov 1 00:21:23.671135 containerd[1713]: time="2025-11-01T00:21:23.671041400Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 1 00:21:23.671135 containerd[1713]: time="2025-11-01T00:21:23.671067400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 1 00:21:23.671135 containerd[1713]: time="2025-11-01T00:21:23.671097500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 1 00:21:23.671135 containerd[1713]: time="2025-11-01T00:21:23.671116500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 1 00:21:23.671337 containerd[1713]: time="2025-11-01T00:21:23.671137700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 1 00:21:23.671337 containerd[1713]: time="2025-11-01T00:21:23.671155000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 1 00:21:23.671337 containerd[1713]: time="2025-11-01T00:21:23.671173300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 1 00:21:23.671337 containerd[1713]: time="2025-11-01T00:21:23.671190100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 1 00:21:23.671337 containerd[1713]: time="2025-11-01T00:21:23.671208800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 1 00:21:23.671337 containerd[1713]: time="2025-11-01T00:21:23.671226700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 1 00:21:23.671337 containerd[1713]: time="2025-11-01T00:21:23.671255900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Nov 1 00:21:23.671337 containerd[1713]: time="2025-11-01T00:21:23.671274200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 1 00:21:23.671337 containerd[1713]: time="2025-11-01T00:21:23.671292400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 1 00:21:23.671337 containerd[1713]: time="2025-11-01T00:21:23.671310200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 1 00:21:23.671337 containerd[1713]: time="2025-11-01T00:21:23.671332800Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 1 00:21:23.671776 containerd[1713]: time="2025-11-01T00:21:23.671363500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 1 00:21:23.671776 containerd[1713]: time="2025-11-01T00:21:23.671384800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 1 00:21:23.671776 containerd[1713]: time="2025-11-01T00:21:23.671401000Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 1 00:21:23.671776 containerd[1713]: time="2025-11-01T00:21:23.671452800Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 1 00:21:23.671776 containerd[1713]: time="2025-11-01T00:21:23.671477800Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 1 00:21:23.671776 containerd[1713]: time="2025-11-01T00:21:23.671493900Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Nov 1 00:21:23.671776 containerd[1713]: time="2025-11-01T00:21:23.671511700Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 1 00:21:23.671776 containerd[1713]: time="2025-11-01T00:21:23.671526100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 1 00:21:23.671776 containerd[1713]: time="2025-11-01T00:21:23.671594000Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 1 00:21:23.671776 containerd[1713]: time="2025-11-01T00:21:23.671609700Z" level=info msg="NRI interface is disabled by configuration." Nov 1 00:21:23.671776 containerd[1713]: time="2025-11-01T00:21:23.671702300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Nov 1 00:21:23.672283 containerd[1713]: time="2025-11-01T00:21:23.672093700Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: 
SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 1 00:21:23.672283 containerd[1713]: time="2025-11-01T00:21:23.672176800Z" level=info msg="Connect containerd service" Nov 1 00:21:23.672283 containerd[1713]: time="2025-11-01T00:21:23.672224200Z" level=info msg="using legacy CRI server" Nov 1 00:21:23.672283 containerd[1713]: time="2025-11-01T00:21:23.672235600Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 1 00:21:23.672675 containerd[1713]: time="2025-11-01T00:21:23.672361400Z" 
level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 1 00:21:23.673049 containerd[1713]: time="2025-11-01T00:21:23.673020600Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 00:21:23.673477 containerd[1713]: time="2025-11-01T00:21:23.673194400Z" level=info msg="Start subscribing containerd event" Nov 1 00:21:23.673759 containerd[1713]: time="2025-11-01T00:21:23.673723800Z" level=info msg="Start recovering state" Nov 1 00:21:23.673881 containerd[1713]: time="2025-11-01T00:21:23.673852900Z" level=info msg="Start event monitor" Nov 1 00:21:23.673937 containerd[1713]: time="2025-11-01T00:21:23.673882800Z" level=info msg="Start snapshots syncer" Nov 1 00:21:23.673937 containerd[1713]: time="2025-11-01T00:21:23.673896800Z" level=info msg="Start cni network conf syncer for default" Nov 1 00:21:23.673937 containerd[1713]: time="2025-11-01T00:21:23.673908200Z" level=info msg="Start streaming server" Nov 1 00:21:23.675113 containerd[1713]: time="2025-11-01T00:21:23.674747400Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 1 00:21:23.675113 containerd[1713]: time="2025-11-01T00:21:23.674813400Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 1 00:21:23.676242 containerd[1713]: time="2025-11-01T00:21:23.676174500Z" level=info msg="containerd successfully booted in 0.048489s" Nov 1 00:21:23.676319 systemd[1]: Started containerd.service - containerd container runtime. Nov 1 00:21:23.682065 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 1 00:21:23.685509 systemd[1]: Startup finished in 1.086s (firmware) + 28.744s (loader) + 1.018s (kernel) + 12.295s (initrd) + 20.123s (userspace) = 1min 3.268s. 
Nov 1 00:21:24.399448 login[1834]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Nov 1 00:21:24.441913 login[1836]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 1 00:21:24.451936 systemd-logind[1692]: New session 1 of user core. Nov 1 00:21:24.455232 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 1 00:21:24.460829 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 1 00:21:24.507304 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 1 00:21:24.513853 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 1 00:21:24.533671 (systemd)[1854]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:21:24.900236 systemd[1854]: Queued start job for default target default.target. Nov 1 00:21:24.906698 systemd[1854]: Created slice app.slice - User Application Slice. Nov 1 00:21:24.906739 systemd[1854]: Reached target paths.target - Paths. Nov 1 00:21:24.906758 systemd[1854]: Reached target timers.target - Timers. Nov 1 00:21:24.908039 systemd[1854]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 1 00:21:24.919105 systemd[1854]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 1 00:21:24.919240 systemd[1854]: Reached target sockets.target - Sockets. Nov 1 00:21:24.919259 systemd[1854]: Reached target basic.target - Basic System. Nov 1 00:21:24.919304 systemd[1854]: Reached target default.target - Main User Target. Nov 1 00:21:24.919340 systemd[1854]: Startup finished in 378ms. Nov 1 00:21:24.919662 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 1 00:21:24.925912 systemd[1]: Started session-1.scope - Session 1 of User core. 
Nov 1 00:21:25.396266 waagent[1831]: 2025-11-01T00:21:25.396105Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Nov 1 00:21:25.399816 login[1834]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 1 00:21:25.400251 waagent[1831]: 2025-11-01T00:21:25.400014Z INFO Daemon Daemon OS: flatcar 4081.3.6 Nov 1 00:21:25.404230 waagent[1831]: 2025-11-01T00:21:25.402997Z INFO Daemon Daemon Python: 3.11.9 Nov 1 00:21:25.404437 systemd-logind[1692]: New session 2 of user core. Nov 1 00:21:25.406149 waagent[1831]: 2025-11-01T00:21:25.405639Z INFO Daemon Daemon Run daemon Nov 1 00:21:25.411085 waagent[1831]: 2025-11-01T00:21:25.411032Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.6' Nov 1 00:21:25.415719 waagent[1831]: 2025-11-01T00:21:25.415662Z INFO Daemon Daemon Using waagent for provisioning Nov 1 00:21:25.418517 waagent[1831]: 2025-11-01T00:21:25.418469Z INFO Daemon Daemon Activate resource disk Nov 1 00:21:25.420971 waagent[1831]: 2025-11-01T00:21:25.420918Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Nov 1 00:21:25.425827 systemd[1]: Started session-2.scope - Session 2 of User core. 
Nov 1 00:21:25.430135 waagent[1831]: 2025-11-01T00:21:25.430081Z INFO Daemon Daemon Found device: None Nov 1 00:21:25.432714 waagent[1831]: 2025-11-01T00:21:25.432663Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Nov 1 00:21:25.437204 waagent[1831]: 2025-11-01T00:21:25.437154Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Nov 1 00:21:25.444196 waagent[1831]: 2025-11-01T00:21:25.444144Z INFO Daemon Daemon Clean protocol and wireserver endpoint Nov 1 00:21:25.447410 waagent[1831]: 2025-11-01T00:21:25.447058Z INFO Daemon Daemon Running default provisioning handler Nov 1 00:21:25.456690 waagent[1831]: 2025-11-01T00:21:25.456269Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Nov 1 00:21:25.463330 waagent[1831]: 2025-11-01T00:21:25.463281Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Nov 1 00:21:25.473094 waagent[1831]: 2025-11-01T00:21:25.463467Z INFO Daemon Daemon cloud-init is enabled: False Nov 1 00:21:25.473094 waagent[1831]: 2025-11-01T00:21:25.464359Z INFO Daemon Daemon Copying ovf-env.xml Nov 1 00:21:25.611305 waagent[1831]: 2025-11-01T00:21:25.607784Z INFO Daemon Daemon Successfully mounted dvd Nov 1 00:21:25.638157 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Nov 1 00:21:25.640072 waagent[1831]: 2025-11-01T00:21:25.639954Z INFO Daemon Daemon Detect protocol endpoint Nov 1 00:21:25.646722 waagent[1831]: 2025-11-01T00:21:25.640359Z INFO Daemon Daemon Clean protocol and wireserver endpoint Nov 1 00:21:25.646722 waagent[1831]: 2025-11-01T00:21:25.641488Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Nov 1 00:21:25.646722 waagent[1831]: 2025-11-01T00:21:25.642364Z INFO Daemon Daemon Test for route to 168.63.129.16 Nov 1 00:21:25.646722 waagent[1831]: 2025-11-01T00:21:25.643407Z INFO Daemon Daemon Route to 168.63.129.16 exists Nov 1 00:21:25.646722 waagent[1831]: 2025-11-01T00:21:25.643760Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Nov 1 00:21:25.670450 waagent[1831]: 2025-11-01T00:21:25.670379Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Nov 1 00:21:25.678852 waagent[1831]: 2025-11-01T00:21:25.670964Z INFO Daemon Daemon Wire protocol version:2012-11-30 Nov 1 00:21:25.678852 waagent[1831]: 2025-11-01T00:21:25.671716Z INFO Daemon Daemon Server preferred version:2015-04-05 Nov 1 00:21:25.763423 waagent[1831]: 2025-11-01T00:21:25.763317Z INFO Daemon Daemon Initializing goal state during protocol detection Nov 1 00:21:25.767278 waagent[1831]: 2025-11-01T00:21:25.767152Z INFO Daemon Daemon Forcing an update of the goal state. Nov 1 00:21:25.770632 waagent[1831]: 2025-11-01T00:21:25.770574Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Nov 1 00:21:25.783154 waagent[1831]: 2025-11-01T00:21:25.783096Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Nov 1 00:21:25.802674 waagent[1831]: 2025-11-01T00:21:25.783776Z INFO Daemon Nov 1 00:21:25.802674 waagent[1831]: 2025-11-01T00:21:25.784411Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: e5a2000a-0620-4a1c-9e1e-dd825d038773 eTag: 13544839623964454484 source: Fabric] Nov 1 00:21:25.802674 waagent[1831]: 2025-11-01T00:21:25.785579Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Nov 1 00:21:25.802674 waagent[1831]: 2025-11-01T00:21:25.786772Z INFO Daemon Nov 1 00:21:25.802674 waagent[1831]: 2025-11-01T00:21:25.787312Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Nov 1 00:21:25.802674 waagent[1831]: 2025-11-01T00:21:25.792137Z INFO Daemon Daemon Downloading artifacts profile blob Nov 1 00:21:25.866330 waagent[1831]: 2025-11-01T00:21:25.866245Z INFO Daemon Downloaded certificate {'thumbprint': 'C9B84E544F176CA341444B3C3C2D63E513A03B73', 'hasPrivateKey': True} Nov 1 00:21:25.871815 waagent[1831]: 2025-11-01T00:21:25.871747Z INFO Daemon Fetch goal state completed Nov 1 00:21:25.879426 waagent[1831]: 2025-11-01T00:21:25.879382Z INFO Daemon Daemon Starting provisioning Nov 1 00:21:25.886515 waagent[1831]: 2025-11-01T00:21:25.879591Z INFO Daemon Daemon Handle ovf-env.xml. Nov 1 00:21:25.886515 waagent[1831]: 2025-11-01T00:21:25.880601Z INFO Daemon Daemon Set hostname [ci-4081.3.6-n-534d15dd10] Nov 1 00:21:25.911037 waagent[1831]: 2025-11-01T00:21:25.910957Z INFO Daemon Daemon Publish hostname [ci-4081.3.6-n-534d15dd10] Nov 1 00:21:25.920376 waagent[1831]: 2025-11-01T00:21:25.911440Z INFO Daemon Daemon Examine /proc/net/route for primary interface Nov 1 00:21:25.920376 waagent[1831]: 2025-11-01T00:21:25.912692Z INFO Daemon Daemon Primary interface is [eth0] Nov 1 00:21:25.984376 systemd-networkd[1344]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 1 00:21:25.984386 systemd-networkd[1344]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Nov 1 00:21:25.984434 systemd-networkd[1344]: eth0: DHCP lease lost Nov 1 00:21:25.986560 waagent[1831]: 2025-11-01T00:21:25.985844Z INFO Daemon Daemon Create user account if not exists Nov 1 00:21:25.989237 systemd-networkd[1344]: eth0: DHCPv6 lease lost Nov 1 00:21:25.989642 waagent[1831]: 2025-11-01T00:21:25.989547Z INFO Daemon Daemon User core already exists, skip useradd Nov 1 00:21:25.992813 waagent[1831]: 2025-11-01T00:21:25.992737Z INFO Daemon Daemon Configure sudoer Nov 1 00:21:25.995427 waagent[1831]: 2025-11-01T00:21:25.995365Z INFO Daemon Daemon Configure sshd Nov 1 00:21:25.997909 waagent[1831]: 2025-11-01T00:21:25.997850Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Nov 1 00:21:26.004194 waagent[1831]: 2025-11-01T00:21:26.004139Z INFO Daemon Daemon Deploy ssh public key. Nov 1 00:21:26.029588 systemd-networkd[1344]: eth0: DHCPv4 address 10.200.8.40/24, gateway 10.200.8.1 acquired from 168.63.129.16 Nov 1 00:21:27.719475 waagent[1831]: 2025-11-01T00:21:27.719405Z INFO Daemon Daemon Provisioning complete Nov 1 00:21:27.734501 waagent[1831]: 2025-11-01T00:21:27.734421Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Nov 1 00:21:27.737473 waagent[1831]: 2025-11-01T00:21:27.737401Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Nov 1 00:21:27.742346 waagent[1831]: 2025-11-01T00:21:27.742285Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Nov 1 00:21:27.866866 waagent[1903]: 2025-11-01T00:21:27.866751Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Nov 1 00:21:27.867296 waagent[1903]: 2025-11-01T00:21:27.866926Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.6 Nov 1 00:21:27.867296 waagent[1903]: 2025-11-01T00:21:27.867010Z INFO ExtHandler ExtHandler Python: 3.11.9 Nov 1 00:21:28.086922 waagent[1903]: 2025-11-01T00:21:28.086743Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.6; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Nov 1 00:21:28.087132 waagent[1903]: 2025-11-01T00:21:28.087070Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 1 00:21:28.087250 waagent[1903]: 2025-11-01T00:21:28.087199Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 1 00:21:28.096168 waagent[1903]: 2025-11-01T00:21:28.096078Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Nov 1 00:21:28.102328 waagent[1903]: 2025-11-01T00:21:28.102264Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Nov 1 00:21:28.102853 waagent[1903]: 2025-11-01T00:21:28.102793Z INFO ExtHandler Nov 1 00:21:28.102956 waagent[1903]: 2025-11-01T00:21:28.102893Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 3a06adb2-cb45-4c15-821b-04325a31c5bd eTag: 13544839623964454484 source: Fabric] Nov 1 00:21:28.103261 waagent[1903]: 2025-11-01T00:21:28.103210Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Nov 1 00:21:28.103858 waagent[1903]: 2025-11-01T00:21:28.103800Z INFO ExtHandler
Nov 1 00:21:28.103940 waagent[1903]: 2025-11-01T00:21:28.103887Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Nov 1 00:21:28.107849 waagent[1903]: 2025-11-01T00:21:28.107804Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Nov 1 00:21:28.170208 waagent[1903]: 2025-11-01T00:21:28.170114Z INFO ExtHandler Downloaded certificate {'thumbprint': 'C9B84E544F176CA341444B3C3C2D63E513A03B73', 'hasPrivateKey': True}
Nov 1 00:21:28.170763 waagent[1903]: 2025-11-01T00:21:28.170703Z INFO ExtHandler Fetch goal state completed
Nov 1 00:21:28.184160 waagent[1903]: 2025-11-01T00:21:28.184084Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1903
Nov 1 00:21:28.184327 waagent[1903]: 2025-11-01T00:21:28.184273Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
Nov 1 00:21:28.185921 waagent[1903]: 2025-11-01T00:21:28.185860Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.6', '', 'Flatcar Container Linux by Kinvolk']
Nov 1 00:21:28.186287 waagent[1903]: 2025-11-01T00:21:28.186236Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Nov 1 00:21:28.258769 waagent[1903]: 2025-11-01T00:21:28.258716Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Nov 1 00:21:28.259009 waagent[1903]: 2025-11-01T00:21:28.258956Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Nov 1 00:21:28.265496 waagent[1903]: 2025-11-01T00:21:28.265447Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Nov 1 00:21:28.272169 systemd[1]: Reloading requested from client PID 1916 ('systemctl') (unit waagent.service)...
Nov 1 00:21:28.272185 systemd[1]: Reloading...
Nov 1 00:21:28.373570 zram_generator::config[1954]: No configuration found.
Nov 1 00:21:28.483974 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 1 00:21:28.565370 systemd[1]: Reloading finished in 292 ms.
Nov 1 00:21:28.595553 waagent[1903]: 2025-11-01T00:21:28.590505Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service
Nov 1 00:21:28.599431 systemd[1]: Reloading requested from client PID 2007 ('systemctl') (unit waagent.service)...
Nov 1 00:21:28.599453 systemd[1]: Reloading...
Nov 1 00:21:28.696571 zram_generator::config[2044]: No configuration found.
Nov 1 00:21:28.813905 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 1 00:21:28.894445 systemd[1]: Reloading finished in 294 ms.
Nov 1 00:21:28.925563 waagent[1903]: 2025-11-01T00:21:28.921114Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service
Nov 1 00:21:28.925563 waagent[1903]: 2025-11-01T00:21:28.921341Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully
Nov 1 00:21:29.392464 waagent[1903]: 2025-11-01T00:21:29.392360Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Nov 1 00:21:29.393157 waagent[1903]: 2025-11-01T00:21:29.393084Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True]
Nov 1 00:21:29.394126 waagent[1903]: 2025-11-01T00:21:29.394053Z INFO ExtHandler ExtHandler Starting env monitor service.
Nov 1 00:21:29.394320 waagent[1903]: 2025-11-01T00:21:29.394224Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Nov 1 00:21:29.394677 waagent[1903]: 2025-11-01T00:21:29.394619Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Nov 1 00:21:29.394970 waagent[1903]: 2025-11-01T00:21:29.394919Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Nov 1 00:21:29.395303 waagent[1903]: 2025-11-01T00:21:29.395240Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Nov 1 00:21:29.395711 waagent[1903]: 2025-11-01T00:21:29.395654Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Nov 1 00:21:29.396211 waagent[1903]: 2025-11-01T00:21:29.396075Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Nov 1 00:21:29.396211 waagent[1903]: 2025-11-01T00:21:29.396142Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Nov 1 00:21:29.396211 waagent[1903]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Nov 1 00:21:29.396211 waagent[1903]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0
Nov 1 00:21:29.396211 waagent[1903]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Nov 1 00:21:29.396211 waagent[1903]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Nov 1 00:21:29.396211 waagent[1903]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Nov 1 00:21:29.396211 waagent[1903]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Nov 1 00:21:29.396815 waagent[1903]: 2025-11-01T00:21:29.396739Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Nov 1 00:21:29.396961 waagent[1903]: 2025-11-01T00:21:29.396888Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Nov 1 00:21:29.397203 waagent[1903]: 2025-11-01T00:21:29.397141Z INFO EnvHandler ExtHandler Configure routes
Nov 1 00:21:29.397317 waagent[1903]: 2025-11-01T00:21:29.397272Z INFO EnvHandler ExtHandler Gateway:None
Nov 1 00:21:29.397416 waagent[1903]: 2025-11-01T00:21:29.397369Z INFO EnvHandler ExtHandler Routes:None
Nov 1 00:21:29.398565 waagent[1903]: 2025-11-01T00:21:29.398483Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Nov 1 00:21:29.398830 waagent[1903]: 2025-11-01T00:21:29.398775Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Nov 1 00:21:29.398985 waagent[1903]: 2025-11-01T00:21:29.398933Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Nov 1 00:21:29.405316 waagent[1903]: 2025-11-01T00:21:29.405277Z INFO ExtHandler ExtHandler
Nov 1 00:21:29.405718 waagent[1903]: 2025-11-01T00:21:29.405661Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 98767885-499a-4de0-ae80-3c0dc2b63660 correlation 6abf86f9-e63e-425f-9e03-b063ecbaf11c created: 2025-11-01T00:20:09.424118Z]
Nov 1 00:21:29.406959 waagent[1903]: 2025-11-01T00:21:29.406914Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Nov 1 00:21:29.407457 waagent[1903]: 2025-11-01T00:21:29.407414Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms]
Nov 1 00:21:29.491576 waagent[1903]: 2025-11-01T00:21:29.491404Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: A9F94886-538F-4277-B7C7-D64632C1A5D7;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0]
Nov 1 00:21:29.518327 waagent[1903]: 2025-11-01T00:21:29.518244Z INFO MonitorHandler ExtHandler Network interfaces:
Nov 1 00:21:29.518327 waagent[1903]: Executing ['ip', '-a', '-o', 'link']:
Nov 1 00:21:29.518327 waagent[1903]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Nov 1 00:21:29.518327 waagent[1903]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:e1:0a:6d brd ff:ff:ff:ff:ff:ff
Nov 1 00:21:29.518327 waagent[1903]: 3: enP50013s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:e1:0a:6d brd ff:ff:ff:ff:ff:ff\ altname enP50013p0s2
Nov 1 00:21:29.518327 waagent[1903]: Executing ['ip', '-4', '-a', '-o', 'address']:
Nov 1 00:21:29.518327 waagent[1903]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Nov 1 00:21:29.518327 waagent[1903]: 2: eth0 inet 10.200.8.40/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever
Nov 1 00:21:29.518327 waagent[1903]: Executing ['ip', '-6', '-a', '-o', 'address']:
Nov 1 00:21:29.518327 waagent[1903]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever
Nov 1 00:21:29.518327 waagent[1903]: 2: eth0 inet6 fe80::6245:bdff:fee1:a6d/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever
Nov 1 00:21:29.636399 waagent[1903]: 2025-11-01T00:21:29.636317Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules:
Nov 1 00:21:29.636399 waagent[1903]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Nov 1 00:21:29.636399 waagent[1903]: pkts bytes target prot opt in out source destination
Nov 1 00:21:29.636399 waagent[1903]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Nov 1 00:21:29.636399 waagent[1903]: pkts bytes target prot opt in out source destination
Nov 1 00:21:29.636399 waagent[1903]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Nov 1 00:21:29.636399 waagent[1903]: pkts bytes target prot opt in out source destination
Nov 1 00:21:29.636399 waagent[1903]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Nov 1 00:21:29.636399 waagent[1903]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Nov 1 00:21:29.636399 waagent[1903]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Nov 1 00:21:29.640018 waagent[1903]: 2025-11-01T00:21:29.639952Z INFO EnvHandler ExtHandler Current Firewall rules:
Nov 1 00:21:29.640018 waagent[1903]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Nov 1 00:21:29.640018 waagent[1903]: pkts bytes target prot opt in out source destination
Nov 1 00:21:29.640018 waagent[1903]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Nov 1 00:21:29.640018 waagent[1903]: pkts bytes target prot opt in out source destination
Nov 1 00:21:29.640018 waagent[1903]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Nov 1 00:21:29.640018 waagent[1903]: pkts bytes target prot opt in out source destination
Nov 1 00:21:29.640018 waagent[1903]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Nov 1 00:21:29.640018 waagent[1903]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Nov 1 00:21:29.640018 waagent[1903]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Nov 1 00:21:29.640401 waagent[1903]: 2025-11-01T00:21:29.640277Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Nov 1 00:21:32.501919 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Nov 1 00:21:32.514763 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 00:21:32.623259 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 00:21:32.629894 (kubelet)[2140]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 1 00:21:33.387399 kubelet[2140]: E1101 00:21:33.387339 2140 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 1 00:21:33.390889 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 1 00:21:33.391103 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 1 00:21:43.641760 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Nov 1 00:21:43.648790 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 00:21:43.749134 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 00:21:43.753686 (kubelet)[2155]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 1 00:21:44.407858 kubelet[2155]: E1101 00:21:44.407803 2155 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 1 00:21:44.410189 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 1 00:21:44.410395 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 1 00:21:44.569174 chronyd[1694]: Selected source PHC0
Nov 1 00:21:54.513162 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Nov 1 00:21:54.519754 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 00:21:54.891060 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 00:21:54.900851 (kubelet)[2170]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 1 00:21:54.937610 kubelet[2170]: E1101 00:21:54.937517 2170 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 1 00:21:54.940069 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 1 00:21:54.940285 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 1 00:21:55.881919 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Nov 1 00:21:55.891835 systemd[1]: Started sshd@0-10.200.8.40:22-10.200.16.10:39122.service - OpenSSH per-connection server daemon (10.200.16.10:39122).
Nov 1 00:21:56.562310 sshd[2178]: Accepted publickey for core from 10.200.16.10 port 39122 ssh2: RSA SHA256:4Mlk2155aZYBTfHdK8aj/hVY9PtYtx0s3kqi60O27VY
Nov 1 00:21:56.564091 sshd[2178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:21:56.569616 systemd-logind[1692]: New session 3 of user core.
Nov 1 00:21:56.575822 systemd[1]: Started session-3.scope - Session 3 of User core.
Nov 1 00:21:57.111483 systemd[1]: Started sshd@1-10.200.8.40:22-10.200.16.10:39138.service - OpenSSH per-connection server daemon (10.200.16.10:39138).
Nov 1 00:21:57.739348 sshd[2183]: Accepted publickey for core from 10.200.16.10 port 39138 ssh2: RSA SHA256:4Mlk2155aZYBTfHdK8aj/hVY9PtYtx0s3kqi60O27VY
Nov 1 00:21:57.741115 sshd[2183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:21:57.746133 systemd-logind[1692]: New session 4 of user core.
Nov 1 00:21:57.755720 systemd[1]: Started session-4.scope - Session 4 of User core.
Nov 1 00:21:58.185587 sshd[2183]: pam_unix(sshd:session): session closed for user core
Nov 1 00:21:58.189396 systemd[1]: sshd@1-10.200.8.40:22-10.200.16.10:39138.service: Deactivated successfully.
Nov 1 00:21:58.191190 systemd[1]: session-4.scope: Deactivated successfully.
Nov 1 00:21:58.191879 systemd-logind[1692]: Session 4 logged out. Waiting for processes to exit.
Nov 1 00:21:58.192774 systemd-logind[1692]: Removed session 4.
Nov 1 00:21:58.295374 systemd[1]: Started sshd@2-10.200.8.40:22-10.200.16.10:39142.service - OpenSSH per-connection server daemon (10.200.16.10:39142).
Nov 1 00:21:58.921559 sshd[2190]: Accepted publickey for core from 10.200.16.10 port 39142 ssh2: RSA SHA256:4Mlk2155aZYBTfHdK8aj/hVY9PtYtx0s3kqi60O27VY
Nov 1 00:21:58.924499 sshd[2190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:21:58.929565 systemd-logind[1692]: New session 5 of user core.
Nov 1 00:21:58.934726 systemd[1]: Started session-5.scope - Session 5 of User core.
Nov 1 00:21:59.361843 sshd[2190]: pam_unix(sshd:session): session closed for user core
Nov 1 00:21:59.365924 systemd[1]: sshd@2-10.200.8.40:22-10.200.16.10:39142.service: Deactivated successfully.
Nov 1 00:21:59.367709 systemd[1]: session-5.scope: Deactivated successfully.
Nov 1 00:21:59.368366 systemd-logind[1692]: Session 5 logged out. Waiting for processes to exit.
Nov 1 00:21:59.369285 systemd-logind[1692]: Removed session 5.
Nov 1 00:21:59.472633 systemd[1]: Started sshd@3-10.200.8.40:22-10.200.16.10:39146.service - OpenSSH per-connection server daemon (10.200.16.10:39146).
Nov 1 00:22:00.100433 sshd[2197]: Accepted publickey for core from 10.200.16.10 port 39146 ssh2: RSA SHA256:4Mlk2155aZYBTfHdK8aj/hVY9PtYtx0s3kqi60O27VY
Nov 1 00:22:00.102061 sshd[2197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:22:00.106934 systemd-logind[1692]: New session 6 of user core.
Nov 1 00:22:00.112722 systemd[1]: Started session-6.scope - Session 6 of User core.
Nov 1 00:22:00.545862 sshd[2197]: pam_unix(sshd:session): session closed for user core
Nov 1 00:22:00.549596 systemd[1]: sshd@3-10.200.8.40:22-10.200.16.10:39146.service: Deactivated successfully.
Nov 1 00:22:00.551371 systemd[1]: session-6.scope: Deactivated successfully.
Nov 1 00:22:00.552091 systemd-logind[1692]: Session 6 logged out. Waiting for processes to exit.
Nov 1 00:22:00.552959 systemd-logind[1692]: Removed session 6.
Nov 1 00:22:00.655486 systemd[1]: Started sshd@4-10.200.8.40:22-10.200.16.10:42026.service - OpenSSH per-connection server daemon (10.200.16.10:42026).
Nov 1 00:22:00.887139 kernel: hv_balloon: Max. dynamic memory size: 8192 MB
Nov 1 00:22:01.284302 sshd[2204]: Accepted publickey for core from 10.200.16.10 port 42026 ssh2: RSA SHA256:4Mlk2155aZYBTfHdK8aj/hVY9PtYtx0s3kqi60O27VY
Nov 1 00:22:01.286083 sshd[2204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:22:01.291923 systemd-logind[1692]: New session 7 of user core.
Nov 1 00:22:01.298726 systemd[1]: Started session-7.scope - Session 7 of User core.
Nov 1 00:22:01.830953 sudo[2207]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Nov 1 00:22:01.831384 sudo[2207]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 1 00:22:01.860913 sudo[2207]: pam_unix(sudo:session): session closed for user root
Nov 1 00:22:01.963560 sshd[2204]: pam_unix(sshd:session): session closed for user core
Nov 1 00:22:01.968247 systemd[1]: sshd@4-10.200.8.40:22-10.200.16.10:42026.service: Deactivated successfully.
Nov 1 00:22:01.970005 systemd[1]: session-7.scope: Deactivated successfully.
Nov 1 00:22:01.970798 systemd-logind[1692]: Session 7 logged out. Waiting for processes to exit.
Nov 1 00:22:01.971698 systemd-logind[1692]: Removed session 7.
Nov 1 00:22:02.078626 systemd[1]: Started sshd@5-10.200.8.40:22-10.200.16.10:42030.service - OpenSSH per-connection server daemon (10.200.16.10:42030).
Nov 1 00:22:02.709122 sshd[2212]: Accepted publickey for core from 10.200.16.10 port 42030 ssh2: RSA SHA256:4Mlk2155aZYBTfHdK8aj/hVY9PtYtx0s3kqi60O27VY
Nov 1 00:22:02.710913 sshd[2212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:22:02.716521 systemd-logind[1692]: New session 8 of user core.
Nov 1 00:22:02.721728 systemd[1]: Started session-8.scope - Session 8 of User core.
Nov 1 00:22:03.056313 sudo[2216]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Nov 1 00:22:03.056704 sudo[2216]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 1 00:22:03.059902 sudo[2216]: pam_unix(sudo:session): session closed for user root
Nov 1 00:22:03.064853 sudo[2215]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Nov 1 00:22:03.065204 sudo[2215]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 1 00:22:03.077177 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Nov 1 00:22:03.079904 auditctl[2219]: No rules
Nov 1 00:22:03.080273 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 1 00:22:03.080490 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Nov 1 00:22:03.083354 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Nov 1 00:22:03.109382 augenrules[2237]: No rules
Nov 1 00:22:03.110819 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Nov 1 00:22:03.112235 sudo[2215]: pam_unix(sudo:session): session closed for user root
Nov 1 00:22:03.223322 sshd[2212]: pam_unix(sshd:session): session closed for user core
Nov 1 00:22:03.226753 systemd[1]: sshd@5-10.200.8.40:22-10.200.16.10:42030.service: Deactivated successfully.
Nov 1 00:22:03.229042 systemd[1]: session-8.scope: Deactivated successfully.
Nov 1 00:22:03.230436 systemd-logind[1692]: Session 8 logged out. Waiting for processes to exit.
Nov 1 00:22:03.231395 systemd-logind[1692]: Removed session 8.
Nov 1 00:22:03.334025 systemd[1]: Started sshd@6-10.200.8.40:22-10.200.16.10:42042.service - OpenSSH per-connection server daemon (10.200.16.10:42042).
Nov 1 00:22:03.961634 sshd[2245]: Accepted publickey for core from 10.200.16.10 port 42042 ssh2: RSA SHA256:4Mlk2155aZYBTfHdK8aj/hVY9PtYtx0s3kqi60O27VY
Nov 1 00:22:03.964152 sshd[2245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:22:03.969985 systemd-logind[1692]: New session 9 of user core.
Nov 1 00:22:03.975700 systemd[1]: Started session-9.scope - Session 9 of User core.
Nov 1 00:22:04.307670 sudo[2248]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Nov 1 00:22:04.308036 sudo[2248]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Nov 1 00:22:05.012868 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Nov 1 00:22:05.017799 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 00:22:05.807216 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 00:22:05.812245 (kubelet)[2270]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 1 00:22:05.849366 kubelet[2270]: E1101 00:22:05.849306 2270 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 1 00:22:05.852563 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 1 00:22:05.852738 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 1 00:22:06.098839 systemd[1]: Starting docker.service - Docker Application Container Engine...
Nov 1 00:22:06.100298 (dockerd)[2278]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Nov 1 00:22:07.516900 dockerd[2278]: time="2025-11-01T00:22:07.516836271Z" level=info msg="Starting up"
Nov 1 00:22:07.910461 dockerd[2278]: time="2025-11-01T00:22:07.910323859Z" level=info msg="Loading containers: start."
Nov 1 00:22:08.112561 kernel: Initializing XFRM netlink socket
Nov 1 00:22:08.249417 update_engine[1695]: I20251101 00:22:08.249224 1695 update_attempter.cc:509] Updating boot flags...
Nov 1 00:22:08.319989 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2357)
Nov 1 00:22:08.400776 systemd-networkd[1344]: docker0: Link UP
Nov 1 00:22:08.429709 dockerd[2278]: time="2025-11-01T00:22:08.429666984Z" level=info msg="Loading containers: done."
Nov 1 00:22:08.456575 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 34 scanned by (udev-worker) (2359)
Nov 1 00:22:08.504614 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1332969085-merged.mount: Deactivated successfully.
Nov 1 00:22:08.513314 dockerd[2278]: time="2025-11-01T00:22:08.513137636Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Nov 1 00:22:08.514306 dockerd[2278]: time="2025-11-01T00:22:08.514250248Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Nov 1 00:22:08.514424 dockerd[2278]: time="2025-11-01T00:22:08.514402950Z" level=info msg="Daemon has completed initialization"
Nov 1 00:22:08.577160 dockerd[2278]: time="2025-11-01T00:22:08.577092965Z" level=info msg="API listen on /run/docker.sock"
Nov 1 00:22:08.577699 systemd[1]: Started docker.service - Docker Application Container Engine.
Nov 1 00:22:09.379494 containerd[1713]: time="2025-11-01T00:22:09.379440045Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\""
Nov 1 00:22:10.125475 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount528665436.mount: Deactivated successfully.
Nov 1 00:22:11.783461 containerd[1713]: time="2025-11-01T00:22:11.782610015Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:22:11.785212 containerd[1713]: time="2025-11-01T00:22:11.785161249Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=27065400"
Nov 1 00:22:11.787977 containerd[1713]: time="2025-11-01T00:22:11.787928386Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:22:11.795083 containerd[1713]: time="2025-11-01T00:22:11.793854566Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:22:11.795083 containerd[1713]: time="2025-11-01T00:22:11.794892680Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 2.415409033s"
Nov 1 00:22:11.795083 containerd[1713]: time="2025-11-01T00:22:11.794931080Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\""
Nov 1 00:22:11.796034 containerd[1713]: time="2025-11-01T00:22:11.796009594Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\""
Nov 1 00:22:13.198791 containerd[1713]: time="2025-11-01T00:22:13.198715772Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:22:13.202318 containerd[1713]: time="2025-11-01T00:22:13.202075817Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=21159765"
Nov 1 00:22:13.205305 containerd[1713]: time="2025-11-01T00:22:13.205244659Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:22:13.211267 containerd[1713]: time="2025-11-01T00:22:13.210020023Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:22:13.211267 containerd[1713]: time="2025-11-01T00:22:13.211117138Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 1.415001641s"
Nov 1 00:22:13.211267 containerd[1713]: time="2025-11-01T00:22:13.211153438Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\""
Nov 1 00:22:13.212093 containerd[1713]: time="2025-11-01T00:22:13.212059750Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\""
Nov 1 00:22:14.334352 containerd[1713]: time="2025-11-01T00:22:14.334298373Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:22:14.339177 containerd[1713]: time="2025-11-01T00:22:14.339112837Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=15725101"
Nov 1 00:22:14.343814 containerd[1713]: time="2025-11-01T00:22:14.343768500Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:22:14.354345 containerd[1713]: time="2025-11-01T00:22:14.354285841Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:22:14.355453 containerd[1713]: time="2025-11-01T00:22:14.355298654Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 1.143106002s"
Nov 1 00:22:14.355453 containerd[1713]: time="2025-11-01T00:22:14.355340055Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\""
Nov 1 00:22:14.356303 containerd[1713]: time="2025-11-01T00:22:14.355993563Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\""
Nov 1 00:22:15.589715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3738712850.mount: Deactivated successfully.
Nov 1 00:22:15.993454 containerd[1713]: time="2025-11-01T00:22:15.993316981Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:22:15.996511 containerd[1713]: time="2025-11-01T00:22:15.996330822Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=25964707"
Nov 1 00:22:16.000519 containerd[1713]: time="2025-11-01T00:22:15.999820568Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:22:16.006034 containerd[1713]: time="2025-11-01T00:22:16.005997851Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:22:16.006610 containerd[1713]: time="2025-11-01T00:22:16.006574259Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 1.650545095s"
Nov 1 00:22:16.006679 containerd[1713]: time="2025-11-01T00:22:16.006617159Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\""
Nov 1 00:22:16.007429 containerd[1713]: time="2025-11-01T00:22:16.007405770Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Nov 1 00:22:16.012781 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Nov 1 00:22:16.021809 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 00:22:16.123971 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 00:22:16.128338 (kubelet)[2561]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 1 00:22:16.164917 kubelet[2561]: E1101 00:22:16.164866 2561 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 1 00:22:16.167149 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 1 00:22:16.167358 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 1 00:22:17.172776 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3679384912.mount: Deactivated successfully.
Nov 1 00:22:18.567085 containerd[1713]: time="2025-11-01T00:22:18.567035269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:22:18.574096 containerd[1713]: time="2025-11-01T00:22:18.574029526Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388015"
Nov 1 00:22:18.577164 containerd[1713]: time="2025-11-01T00:22:18.577094695Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:22:18.581814 containerd[1713]: time="2025-11-01T00:22:18.581764700Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:22:18.583015 containerd[1713]: time="2025-11-01T00:22:18.582830424Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.575317552s"
Nov 1 00:22:18.583015 containerd[1713]: time="2025-11-01T00:22:18.582868424Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Nov 1 00:22:18.583739 containerd[1713]: time="2025-11-01T00:22:18.583511739Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Nov 1 00:22:19.170313 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3365753074.mount: Deactivated successfully.
Nov 1 00:22:19.195667 containerd[1713]: time="2025-11-01T00:22:19.195506395Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:22:19.199341 containerd[1713]: time="2025-11-01T00:22:19.199195378Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321226"
Nov 1 00:22:19.202802 containerd[1713]: time="2025-11-01T00:22:19.202650155Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:22:19.206892 containerd[1713]: time="2025-11-01T00:22:19.206838150Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:22:19.207766 containerd[1713]: time="2025-11-01T00:22:19.207575066Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 623.853723ms"
Nov 1 00:22:19.207766 containerd[1713]: time="2025-11-01T00:22:19.207617267Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Nov 1 00:22:19.208116 containerd[1713]: time="2025-11-01T00:22:19.208087978Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\""
Nov 1 00:22:22.858753 containerd[1713]: time="2025-11-01T00:22:22.858701123Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:22:22.862051 containerd[1713]: time="2025-11-01T00:22:22.861982965Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=73514601"
Nov 1 00:22:22.865319 containerd[1713]: time="2025-11-01T00:22:22.865276307Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:22:22.870965 containerd[1713]: time="2025-11-01T00:22:22.870909280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:22:22.873564 containerd[1713]: time="2025-11-01T00:22:22.872068395Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 3.663942317s"
Nov 1 00:22:22.873564 containerd[1713]: time="2025-11-01T00:22:22.872109695Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\""
Nov 1 00:22:25.939064 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 00:22:25.945819 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 00:22:25.973875 systemd[1]: Reloading requested from client PID 2693 ('systemctl') (unit session-9.scope)...
Nov 1 00:22:25.973891 systemd[1]: Reloading...
Nov 1 00:22:26.086706 zram_generator::config[2739]: No configuration found.
Nov 1 00:22:26.195475 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 1 00:22:26.276338 systemd[1]: Reloading finished in 302 ms.
Nov 1 00:22:26.586098 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Nov 1 00:22:26.586255 systemd[1]: kubelet.service: Failed with result 'signal'.
Nov 1 00:22:26.587175 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 00:22:26.592989 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 00:22:27.525221 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 00:22:27.535876 (kubelet)[2800]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 1 00:22:27.573561 kubelet[2800]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 1 00:22:27.573561 kubelet[2800]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag.
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:22:27.573561 kubelet[2800]: I1101 00:22:27.573361 2800 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:22:28.204300 kubelet[2800]: I1101 00:22:28.204261 2800 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 1 00:22:28.204300 kubelet[2800]: I1101 00:22:28.204289 2800 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:22:28.204490 kubelet[2800]: I1101 00:22:28.204318 2800 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 1 00:22:28.204490 kubelet[2800]: I1101 00:22:28.204326 2800 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 1 00:22:28.204655 kubelet[2800]: I1101 00:22:28.204633 2800 server.go:956] "Client rotation is on, will bootstrap in background" Nov 1 00:22:28.237119 kubelet[2800]: E1101 00:22:28.237069 2800 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.40:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.40:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 1 00:22:28.238783 kubelet[2800]: I1101 00:22:28.238611 2800 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:22:28.244112 kubelet[2800]: E1101 00:22:28.243568 2800 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:22:28.244112 kubelet[2800]: I1101 00:22:28.243647 2800 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. 
Falling back to using cgroupDriver from kubelet config." Nov 1 00:22:28.247720 kubelet[2800]: I1101 00:22:28.247510 2800 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Nov 1 00:22:28.248905 kubelet[2800]: I1101 00:22:28.248425 2800 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:22:28.248905 kubelet[2800]: I1101 00:22:28.248462 2800 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-534d15dd10","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","
TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 1 00:22:28.248905 kubelet[2800]: I1101 00:22:28.248681 2800 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 00:22:28.248905 kubelet[2800]: I1101 00:22:28.248695 2800 container_manager_linux.go:306] "Creating device plugin manager" Nov 1 00:22:28.249144 kubelet[2800]: I1101 00:22:28.248803 2800 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 1 00:22:28.256877 kubelet[2800]: I1101 00:22:28.256245 2800 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:22:28.262562 kubelet[2800]: I1101 00:22:28.261781 2800 kubelet.go:475] "Attempting to sync node with API server" Nov 1 00:22:28.262562 kubelet[2800]: I1101 00:22:28.261817 2800 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:22:28.262562 kubelet[2800]: I1101 00:22:28.261849 2800 kubelet.go:387] "Adding apiserver pod source" Nov 1 00:22:28.262562 kubelet[2800]: I1101 00:22:28.261873 2800 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:22:28.265700 kubelet[2800]: I1101 00:22:28.264620 2800 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 1 00:22:28.265700 kubelet[2800]: I1101 00:22:28.265209 2800 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 1 00:22:28.265700 kubelet[2800]: I1101 00:22:28.265246 2800 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 1 00:22:28.265700 kubelet[2800]: W1101 00:22:28.265297 2800 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
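The kubelet entries above and below use klog's header format: a severity letter (`I`/`W`/`E`/`F`), `mmdd`, a wall-clock timestamp, the PID, and a `file:line]` source location before the message. A minimal sketch of splitting that header apart (the field names are my own, not klog's):

```python
import re

# klog header: Lmmdd hh:mm:ss.uuuuuu pid file:line] msg
# e.g. 'E1101 00:22:16.164866 2561 run.go:72] "command failed"'
KLOG_RE = re.compile(
    r'(?P<sev>[IWEF])(?P<month>\d{2})(?P<day>\d{2})\s+'
    r'(?P<time>\d{2}:\d{2}:\d{2}\.\d+)\s+'
    r'(?P<pid>\d+)\s+(?P<src>[^ \]]+)\]\s*(?P<msg>.*)'
)

def parse_klog(line):
    """Return the klog header fields as a dict, or None if the line
    does not start with a klog header."""
    m = KLOG_RE.match(line)
    return m.groupdict() if m else None

rec = parse_klog('E1101 00:22:16.164866 2561 run.go:72] "command failed"')
```

This is only a header splitter for reading logs like the ones here; structured fields inside the message (`err=...`, `node=...`) would need further parsing.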
Nov 1 00:22:28.268441 kubelet[2800]: I1101 00:22:28.268425 2800 server.go:1262] "Started kubelet" Nov 1 00:22:28.268757 kubelet[2800]: E1101 00:22:28.268733 2800 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.40:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 1 00:22:28.268970 kubelet[2800]: E1101 00:22:28.268949 2800 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-534d15dd10&limit=500&resourceVersion=0\": dial tcp 10.200.8.40:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 1 00:22:28.270575 kubelet[2800]: I1101 00:22:28.270530 2800 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:22:28.271602 kubelet[2800]: I1101 00:22:28.271579 2800 server.go:310] "Adding debug handlers to kubelet server" Nov 1 00:22:28.273229 kubelet[2800]: I1101 00:22:28.272547 2800 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:22:28.273229 kubelet[2800]: I1101 00:22:28.272594 2800 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 1 00:22:28.273229 kubelet[2800]: I1101 00:22:28.272912 2800 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:22:28.274364 kubelet[2800]: E1101 00:22:28.273145 2800 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.40:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.40:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-534d15dd10.1873ba2539361bb2 default 0 0001-01-01 00:00:00 +0000 UTC 
map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-534d15dd10,UID:ci-4081.3.6-n-534d15dd10,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-534d15dd10,},FirstTimestamp:2025-11-01 00:22:28.268399538 +0000 UTC m=+0.729390880,LastTimestamp:2025-11-01 00:22:28.268399538 +0000 UTC m=+0.729390880,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-534d15dd10,}" Nov 1 00:22:28.275497 kubelet[2800]: I1101 00:22:28.275471 2800 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:22:28.277615 kubelet[2800]: I1101 00:22:28.277331 2800 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:22:28.280706 kubelet[2800]: E1101 00:22:28.280686 2800 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-534d15dd10\" not found" Nov 1 00:22:28.280831 kubelet[2800]: I1101 00:22:28.280819 2800 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 1 00:22:28.281106 kubelet[2800]: I1101 00:22:28.281088 2800 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 1 00:22:28.281241 kubelet[2800]: I1101 00:22:28.281229 2800 reconciler.go:29] "Reconciler: start to sync state" Nov 1 00:22:28.281711 kubelet[2800]: E1101 00:22:28.281688 2800 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.40:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 1 00:22:28.282409 kubelet[2800]: E1101 00:22:28.282389 2800 kubelet.go:1615] "Image garbage collection 
failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:22:28.282746 kubelet[2800]: I1101 00:22:28.282726 2800 factory.go:223] Registration of the systemd container factory successfully Nov 1 00:22:28.282920 kubelet[2800]: I1101 00:22:28.282900 2800 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:22:28.284071 kubelet[2800]: E1101 00:22:28.284028 2800 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-534d15dd10?timeout=10s\": dial tcp 10.200.8.40:6443: connect: connection refused" interval="200ms" Nov 1 00:22:28.284297 kubelet[2800]: I1101 00:22:28.284278 2800 factory.go:223] Registration of the containerd container factory successfully Nov 1 00:22:28.331083 kubelet[2800]: I1101 00:22:28.331045 2800 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:22:28.331083 kubelet[2800]: I1101 00:22:28.331063 2800 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:22:28.331083 kubelet[2800]: I1101 00:22:28.331084 2800 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:22:28.337472 kubelet[2800]: I1101 00:22:28.336810 2800 policy_none.go:49] "None policy: Start" Nov 1 00:22:28.337472 kubelet[2800]: I1101 00:22:28.336835 2800 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 1 00:22:28.337472 kubelet[2800]: I1101 00:22:28.336849 2800 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 1 00:22:28.347351 kubelet[2800]: I1101 00:22:28.347326 2800 policy_none.go:47] "Start" Nov 1 00:22:28.351353 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Nov 1 00:22:28.357406 kubelet[2800]: I1101 00:22:28.357374 2800 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 1 00:22:28.359363 kubelet[2800]: I1101 00:22:28.359237 2800 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Nov 1 00:22:28.359363 kubelet[2800]: I1101 00:22:28.359270 2800 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 1 00:22:28.360258 kubelet[2800]: I1101 00:22:28.359702 2800 kubelet.go:2427] "Starting kubelet main sync loop" Nov 1 00:22:28.360258 kubelet[2800]: E1101 00:22:28.359761 2800 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:22:28.360258 kubelet[2800]: E1101 00:22:28.360218 2800 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.40:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 1 00:22:28.367983 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 1 00:22:28.371286 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Nov 1 00:22:28.381589 kubelet[2800]: E1101 00:22:28.381523 2800 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-534d15dd10\" not found" Nov 1 00:22:28.382857 kubelet[2800]: E1101 00:22:28.382297 2800 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 1 00:22:28.382857 kubelet[2800]: I1101 00:22:28.382513 2800 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:22:28.382857 kubelet[2800]: I1101 00:22:28.382527 2800 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:22:28.382857 kubelet[2800]: I1101 00:22:28.382807 2800 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:22:28.385501 kubelet[2800]: E1101 00:22:28.385479 2800 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 1 00:22:28.385599 kubelet[2800]: E1101 00:22:28.385525 2800 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-534d15dd10\" not found" Nov 1 00:22:28.473798 systemd[1]: Created slice kubepods-burstable-pod3638a92b8513c4b564ee4419848f47ee.slice - libcontainer container kubepods-burstable-pod3638a92b8513c4b564ee4419848f47ee.slice. 
Nov 1 00:22:28.479365 kubelet[2800]: E1101 00:22:28.479338 2800 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-534d15dd10\" not found" node="ci-4081.3.6-n-534d15dd10" Nov 1 00:22:28.482713 kubelet[2800]: I1101 00:22:28.482688 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f35f8d54f7ed8b8be0e1f10d9e6e99ca-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-534d15dd10\" (UID: \"f35f8d54f7ed8b8be0e1f10d9e6e99ca\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-534d15dd10" Nov 1 00:22:28.482808 kubelet[2800]: I1101 00:22:28.482726 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3638a92b8513c4b564ee4419848f47ee-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-534d15dd10\" (UID: \"3638a92b8513c4b564ee4419848f47ee\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-534d15dd10" Nov 1 00:22:28.482808 kubelet[2800]: I1101 00:22:28.482766 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/15930ecc3b5819895d44714c6f97ffcd-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-534d15dd10\" (UID: \"15930ecc3b5819895d44714c6f97ffcd\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-534d15dd10" Nov 1 00:22:28.482808 kubelet[2800]: I1101 00:22:28.482788 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/15930ecc3b5819895d44714c6f97ffcd-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-534d15dd10\" (UID: \"15930ecc3b5819895d44714c6f97ffcd\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-534d15dd10" Nov 1 00:22:28.482942 kubelet[2800]: I1101 
00:22:28.482810 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f35f8d54f7ed8b8be0e1f10d9e6e99ca-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-534d15dd10\" (UID: \"f35f8d54f7ed8b8be0e1f10d9e6e99ca\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-534d15dd10" Nov 1 00:22:28.482942 kubelet[2800]: I1101 00:22:28.482856 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f35f8d54f7ed8b8be0e1f10d9e6e99ca-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-534d15dd10\" (UID: \"f35f8d54f7ed8b8be0e1f10d9e6e99ca\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-534d15dd10" Nov 1 00:22:28.482942 kubelet[2800]: I1101 00:22:28.482878 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f35f8d54f7ed8b8be0e1f10d9e6e99ca-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-534d15dd10\" (UID: \"f35f8d54f7ed8b8be0e1f10d9e6e99ca\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-534d15dd10" Nov 1 00:22:28.482942 kubelet[2800]: I1101 00:22:28.482927 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f35f8d54f7ed8b8be0e1f10d9e6e99ca-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-534d15dd10\" (UID: \"f35f8d54f7ed8b8be0e1f10d9e6e99ca\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-534d15dd10" Nov 1 00:22:28.483099 kubelet[2800]: I1101 00:22:28.482951 2800 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/15930ecc3b5819895d44714c6f97ffcd-k8s-certs\") pod 
\"kube-apiserver-ci-4081.3.6-n-534d15dd10\" (UID: \"15930ecc3b5819895d44714c6f97ffcd\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-534d15dd10" Nov 1 00:22:28.484466 kubelet[2800]: E1101 00:22:28.484440 2800 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-534d15dd10?timeout=10s\": dial tcp 10.200.8.40:6443: connect: connection refused" interval="400ms" Nov 1 00:22:28.484496 systemd[1]: Created slice kubepods-burstable-pod15930ecc3b5819895d44714c6f97ffcd.slice - libcontainer container kubepods-burstable-pod15930ecc3b5819895d44714c6f97ffcd.slice. Nov 1 00:22:28.486998 kubelet[2800]: E1101 00:22:28.486694 2800 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-534d15dd10\" not found" node="ci-4081.3.6-n-534d15dd10" Nov 1 00:22:28.486998 kubelet[2800]: I1101 00:22:28.486717 2800 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-534d15dd10" Nov 1 00:22:28.487112 kubelet[2800]: E1101 00:22:28.487089 2800 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.40:6443/api/v1/nodes\": dial tcp 10.200.8.40:6443: connect: connection refused" node="ci-4081.3.6-n-534d15dd10" Nov 1 00:22:28.497626 systemd[1]: Created slice kubepods-burstable-podf35f8d54f7ed8b8be0e1f10d9e6e99ca.slice - libcontainer container kubepods-burstable-podf35f8d54f7ed8b8be0e1f10d9e6e99ca.slice. 
Nov 1 00:22:28.499386 kubelet[2800]: E1101 00:22:28.499367 2800 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-534d15dd10\" not found" node="ci-4081.3.6-n-534d15dd10" Nov 1 00:22:28.639595 kubelet[2800]: E1101 00:22:28.639442 2800 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.40:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.40:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-534d15dd10.1873ba2539361bb2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-534d15dd10,UID:ci-4081.3.6-n-534d15dd10,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-534d15dd10,},FirstTimestamp:2025-11-01 00:22:28.268399538 +0000 UTC m=+0.729390880,LastTimestamp:2025-11-01 00:22:28.268399538 +0000 UTC m=+0.729390880,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-534d15dd10,}" Nov 1 00:22:28.689906 kubelet[2800]: I1101 00:22:28.689864 2800 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-534d15dd10" Nov 1 00:22:28.690273 kubelet[2800]: E1101 00:22:28.690231 2800 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.40:6443/api/v1/nodes\": dial tcp 10.200.8.40:6443: connect: connection refused" node="ci-4081.3.6-n-534d15dd10" Nov 1 00:22:28.788473 containerd[1713]: time="2025-11-01T00:22:28.788429212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-534d15dd10,Uid:3638a92b8513c4b564ee4419848f47ee,Namespace:kube-system,Attempt:0,}" Nov 1 00:22:28.794369 containerd[1713]: time="2025-11-01T00:22:28.794328980Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-534d15dd10,Uid:15930ecc3b5819895d44714c6f97ffcd,Namespace:kube-system,Attempt:0,}" Nov 1 00:22:28.807411 containerd[1713]: time="2025-11-01T00:22:28.807377930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-534d15dd10,Uid:f35f8d54f7ed8b8be0e1f10d9e6e99ca,Namespace:kube-system,Attempt:0,}" Nov 1 00:22:28.885770 kubelet[2800]: E1101 00:22:28.885715 2800 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-534d15dd10?timeout=10s\": dial tcp 10.200.8.40:6443: connect: connection refused" interval="800ms" Nov 1 00:22:29.092214 kubelet[2800]: I1101 00:22:29.092114 2800 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-534d15dd10" Nov 1 00:22:29.092505 kubelet[2800]: E1101 00:22:29.092475 2800 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.40:6443/api/v1/nodes\": dial tcp 10.200.8.40:6443: connect: connection refused" node="ci-4081.3.6-n-534d15dd10" Nov 1 00:22:29.309465 kubelet[2800]: E1101 00:22:29.309420 2800 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-534d15dd10&limit=500&resourceVersion=0\": dial tcp 10.200.8.40:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 1 00:22:29.367224 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2574927509.mount: Deactivated successfully. 
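Mount-unit names like `var-lib-containerd-tmpmounts-containerd\x2dmount2574927509.mount` above use systemd's unit-name escaping: `/` in the path becomes `-`, and literal dashes (and other special bytes) become `\xXX` sequences. A minimal sketch of reversing that for absolute paths (it does not handle the special root unit `-.mount`):

```python
import re

def systemd_unescape(unit):
    """Recover the mount path from an escaped systemd mount-unit name:
    drop the '.mount' suffix, turn '-' back into '/', then decode
    \\xXX escapes (e.g. \\x2d -> '-')."""
    name = unit.rsplit('.', 1)[0]                # strip the unit-type suffix
    name = name.replace('-', '/')                # '-' encoded '/'
    name = re.sub(r'\\x([0-9a-fA-F]{2})',
                  lambda m: chr(int(m.group(1), 16)), name)
    return '/' + name
```

Applied to the unit above, this yields the containerd temp-mount path `/var/lib/containerd/tmpmounts/containerd-mount2574927509` that the "Deactivated successfully" entry refers to.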
Nov 1 00:22:29.391726 containerd[1713]: time="2025-11-01T00:22:29.391657242Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:22:29.395001 containerd[1713]: time="2025-11-01T00:22:29.394553675Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Nov 1 00:22:29.397750 containerd[1713]: time="2025-11-01T00:22:29.397708711Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:22:29.401324 containerd[1713]: time="2025-11-01T00:22:29.401285552Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:22:29.408425 containerd[1713]: time="2025-11-01T00:22:29.407918429Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 1 00:22:29.419423 containerd[1713]: time="2025-11-01T00:22:29.419381160Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:22:29.427297 containerd[1713]: time="2025-11-01T00:22:29.427261951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:22:29.428071 containerd[1713]: time="2025-11-01T00:22:29.428038360Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 639.526047ms" Nov 1 00:22:29.430191 containerd[1713]: time="2025-11-01T00:22:29.430110283Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 1 00:22:29.436389 containerd[1713]: time="2025-11-01T00:22:29.436353855Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 641.951274ms" Nov 1 00:22:29.473476 containerd[1713]: time="2025-11-01T00:22:29.473422681Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 665.978751ms" Nov 1 00:22:29.519264 kubelet[2800]: E1101 00:22:29.519226 2800 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.40:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 1 00:22:29.626902 kubelet[2800]: E1101 00:22:29.626577 2800 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.40:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.40:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 1 00:22:29.687092 kubelet[2800]: E1101 00:22:29.686998 2800 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-534d15dd10?timeout=10s\": dial tcp 10.200.8.40:6443: connect: connection refused" interval="1.6s" Nov 1 00:22:29.886013 kubelet[2800]: E1101 00:22:29.885890 2800 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.40:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 1 00:22:29.895375 kubelet[2800]: I1101 00:22:29.894965 2800 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-534d15dd10" Nov 1 00:22:29.895375 kubelet[2800]: E1101 00:22:29.895337 2800 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.40:6443/api/v1/nodes\": dial tcp 10.200.8.40:6443: connect: connection refused" node="ci-4081.3.6-n-534d15dd10" Nov 1 00:22:30.115106 containerd[1713]: time="2025-11-01T00:22:30.114919050Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:30.115106 containerd[1713]: time="2025-11-01T00:22:30.115004951Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:30.115106 containerd[1713]: time="2025-11-01T00:22:30.115024051Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:30.115639 containerd[1713]: time="2025-11-01T00:22:30.115434456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:30.117776 containerd[1713]: time="2025-11-01T00:22:30.117559381Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:30.117776 containerd[1713]: time="2025-11-01T00:22:30.117617981Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:30.118734 containerd[1713]: time="2025-11-01T00:22:30.117678782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:30.119195 containerd[1713]: time="2025-11-01T00:22:30.119071698Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:30.128742 containerd[1713]: time="2025-11-01T00:22:30.128369105Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:30.128742 containerd[1713]: time="2025-11-01T00:22:30.128433006Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:30.128742 containerd[1713]: time="2025-11-01T00:22:30.128455806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:30.128742 containerd[1713]: time="2025-11-01T00:22:30.128655408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:30.165750 systemd[1]: Started cri-containerd-acfe298d0a9e1d96de53c0df0380416f23165b9f2816bb571d2e7c9ce9b9e3f1.scope - libcontainer container acfe298d0a9e1d96de53c0df0380416f23165b9f2816bb571d2e7c9ce9b9e3f1. Nov 1 00:22:30.181683 systemd[1]: Started cri-containerd-08bea2dbeb92e02e8c40de7b7d5e2dc1b9fe5337cb86ce854e5e8a6c70a56200.scope - libcontainer container 08bea2dbeb92e02e8c40de7b7d5e2dc1b9fe5337cb86ce854e5e8a6c70a56200. Nov 1 00:22:30.189710 systemd[1]: Started cri-containerd-5821ecba8cb1b32f4ea5792239e241df78c85ec7d1011a4dd3fe81ff6682abe8.scope - libcontainer container 5821ecba8cb1b32f4ea5792239e241df78c85ec7d1011a4dd3fe81ff6682abe8. Nov 1 00:22:30.256900 containerd[1713]: time="2025-11-01T00:22:30.256829280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-534d15dd10,Uid:15930ecc3b5819895d44714c6f97ffcd,Namespace:kube-system,Attempt:0,} returns sandbox id \"08bea2dbeb92e02e8c40de7b7d5e2dc1b9fe5337cb86ce854e5e8a6c70a56200\"" Nov 1 00:22:30.271841 containerd[1713]: time="2025-11-01T00:22:30.271743152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-534d15dd10,Uid:3638a92b8513c4b564ee4419848f47ee,Namespace:kube-system,Attempt:0,} returns sandbox id \"acfe298d0a9e1d96de53c0df0380416f23165b9f2816bb571d2e7c9ce9b9e3f1\"" Nov 1 00:22:30.272234 containerd[1713]: time="2025-11-01T00:22:30.272164357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-534d15dd10,Uid:f35f8d54f7ed8b8be0e1f10d9e6e99ca,Namespace:kube-system,Attempt:0,} returns sandbox id \"5821ecba8cb1b32f4ea5792239e241df78c85ec7d1011a4dd3fe81ff6682abe8\"" Nov 1 00:22:30.282037 containerd[1713]: time="2025-11-01T00:22:30.281947369Z" level=info msg="CreateContainer within sandbox \"08bea2dbeb92e02e8c40de7b7d5e2dc1b9fe5337cb86ce854e5e8a6c70a56200\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 
1 00:22:30.285626 containerd[1713]: time="2025-11-01T00:22:30.285600011Z" level=info msg="CreateContainer within sandbox \"acfe298d0a9e1d96de53c0df0380416f23165b9f2816bb571d2e7c9ce9b9e3f1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 1 00:22:30.290186 containerd[1713]: time="2025-11-01T00:22:30.289978361Z" level=info msg="CreateContainer within sandbox \"5821ecba8cb1b32f4ea5792239e241df78c85ec7d1011a4dd3fe81ff6682abe8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 1 00:22:30.301448 kubelet[2800]: E1101 00:22:30.301387 2800 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.40:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.40:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 1 00:22:30.373336 containerd[1713]: time="2025-11-01T00:22:30.373274818Z" level=info msg="CreateContainer within sandbox \"08bea2dbeb92e02e8c40de7b7d5e2dc1b9fe5337cb86ce854e5e8a6c70a56200\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"237304eb3b06622e5304cc37ac164cea065bee4cab8226a8574892025ff55312\"" Nov 1 00:22:30.373945 containerd[1713]: time="2025-11-01T00:22:30.373913626Z" level=info msg="StartContainer for \"237304eb3b06622e5304cc37ac164cea065bee4cab8226a8574892025ff55312\"" Nov 1 00:22:30.389326 containerd[1713]: time="2025-11-01T00:22:30.389288002Z" level=info msg="CreateContainer within sandbox \"acfe298d0a9e1d96de53c0df0380416f23165b9f2816bb571d2e7c9ce9b9e3f1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"96120c403dad52bb8a4bc8fb4a1493c6ee490a509e5bc536722981d54158c0b8\"" Nov 1 00:22:30.390087 containerd[1713]: time="2025-11-01T00:22:30.389930109Z" level=info msg="StartContainer for \"96120c403dad52bb8a4bc8fb4a1493c6ee490a509e5bc536722981d54158c0b8\"" Nov 
1 00:22:30.393273 containerd[1713]: time="2025-11-01T00:22:30.393166447Z" level=info msg="CreateContainer within sandbox \"5821ecba8cb1b32f4ea5792239e241df78c85ec7d1011a4dd3fe81ff6682abe8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3c021e0efd6e7335f7a3c72aec69ea1c1f0d4b8236c08ab99e12a05afd6c02ec\"" Nov 1 00:22:30.397286 containerd[1713]: time="2025-11-01T00:22:30.395914178Z" level=info msg="StartContainer for \"3c021e0efd6e7335f7a3c72aec69ea1c1f0d4b8236c08ab99e12a05afd6c02ec\"" Nov 1 00:22:30.428747 systemd[1]: Started cri-containerd-237304eb3b06622e5304cc37ac164cea065bee4cab8226a8574892025ff55312.scope - libcontainer container 237304eb3b06622e5304cc37ac164cea065bee4cab8226a8574892025ff55312. Nov 1 00:22:30.449099 systemd[1]: Started cri-containerd-96120c403dad52bb8a4bc8fb4a1493c6ee490a509e5bc536722981d54158c0b8.scope - libcontainer container 96120c403dad52bb8a4bc8fb4a1493c6ee490a509e5bc536722981d54158c0b8. Nov 1 00:22:30.465740 systemd[1]: Started cri-containerd-3c021e0efd6e7335f7a3c72aec69ea1c1f0d4b8236c08ab99e12a05afd6c02ec.scope - libcontainer container 3c021e0efd6e7335f7a3c72aec69ea1c1f0d4b8236c08ab99e12a05afd6c02ec. 
Nov 1 00:22:30.528044 containerd[1713]: time="2025-11-01T00:22:30.527827494Z" level=info msg="StartContainer for \"237304eb3b06622e5304cc37ac164cea065bee4cab8226a8574892025ff55312\" returns successfully" Nov 1 00:22:30.550326 containerd[1713]: time="2025-11-01T00:22:30.550104750Z" level=info msg="StartContainer for \"96120c403dad52bb8a4bc8fb4a1493c6ee490a509e5bc536722981d54158c0b8\" returns successfully" Nov 1 00:22:30.572940 containerd[1713]: time="2025-11-01T00:22:30.572806810Z" level=info msg="StartContainer for \"3c021e0efd6e7335f7a3c72aec69ea1c1f0d4b8236c08ab99e12a05afd6c02ec\" returns successfully" Nov 1 00:22:31.380972 kubelet[2800]: E1101 00:22:31.380936 2800 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-534d15dd10\" not found" node="ci-4081.3.6-n-534d15dd10" Nov 1 00:22:31.381474 kubelet[2800]: E1101 00:22:31.381347 2800 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-534d15dd10\" not found" node="ci-4081.3.6-n-534d15dd10" Nov 1 00:22:31.383845 kubelet[2800]: E1101 00:22:31.383699 2800 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-534d15dd10\" not found" node="ci-4081.3.6-n-534d15dd10" Nov 1 00:22:31.499233 kubelet[2800]: I1101 00:22:31.498454 2800 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-534d15dd10" Nov 1 00:22:32.388557 kubelet[2800]: E1101 00:22:32.387222 2800 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-534d15dd10\" not found" node="ci-4081.3.6-n-534d15dd10" Nov 1 00:22:32.388557 kubelet[2800]: E1101 00:22:32.388006 2800 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-534d15dd10\" not found" node="ci-4081.3.6-n-534d15dd10" Nov 1 
00:22:32.389288 kubelet[2800]: E1101 00:22:32.389142 2800 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-534d15dd10\" not found" node="ci-4081.3.6-n-534d15dd10" Nov 1 00:22:32.618737 kubelet[2800]: E1101 00:22:32.618701 2800 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-n-534d15dd10\" not found" node="ci-4081.3.6-n-534d15dd10" Nov 1 00:22:32.664308 kubelet[2800]: I1101 00:22:32.664183 2800 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-534d15dd10" Nov 1 00:22:32.664308 kubelet[2800]: E1101 00:22:32.664225 2800 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ci-4081.3.6-n-534d15dd10\": node \"ci-4081.3.6-n-534d15dd10\" not found" Nov 1 00:22:32.684342 kubelet[2800]: I1101 00:22:32.684039 2800 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-534d15dd10" Nov 1 00:22:32.756812 kubelet[2800]: E1101 00:22:32.756773 2800 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-534d15dd10\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-534d15dd10" Nov 1 00:22:32.757221 kubelet[2800]: I1101 00:22:32.756994 2800 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-534d15dd10" Nov 1 00:22:32.761309 kubelet[2800]: E1101 00:22:32.761127 2800 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-534d15dd10\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-534d15dd10" Nov 1 00:22:32.761309 kubelet[2800]: I1101 00:22:32.761152 2800 kubelet.go:3219] "Creating a mirror pod for static pod" 
pod="kube-system/kube-apiserver-ci-4081.3.6-n-534d15dd10" Nov 1 00:22:32.764062 kubelet[2800]: E1101 00:22:32.764036 2800 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-534d15dd10\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-534d15dd10" Nov 1 00:22:33.265774 kubelet[2800]: I1101 00:22:33.265733 2800 apiserver.go:52] "Watching apiserver" Nov 1 00:22:33.282305 kubelet[2800]: I1101 00:22:33.282273 2800 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 1 00:22:33.387024 kubelet[2800]: I1101 00:22:33.386984 2800 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-534d15dd10" Nov 1 00:22:33.387349 kubelet[2800]: I1101 00:22:33.387323 2800 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-534d15dd10" Nov 1 00:22:33.400656 kubelet[2800]: I1101 00:22:33.400622 2800 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 1 00:22:33.405229 kubelet[2800]: I1101 00:22:33.405189 2800 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 1 00:22:34.879356 systemd[1]: Reloading requested from client PID 3086 ('systemctl') (unit session-9.scope)... Nov 1 00:22:34.879371 systemd[1]: Reloading... Nov 1 00:22:34.974572 zram_generator::config[3126]: No configuration found. Nov 1 00:22:35.102039 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 1 00:22:35.195467 systemd[1]: Reloading finished in 315 ms. 
Nov 1 00:22:35.246005 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 00:22:35.269239 systemd[1]: kubelet.service: Deactivated successfully.
Nov 1 00:22:35.269841 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 00:22:35.280506 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 1 00:22:35.552179 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 1 00:22:35.561882 (kubelet)[3194]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 1 00:22:35.605435 kubelet[3194]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Nov 1 00:22:35.605435 kubelet[3194]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 1 00:22:36.037118 kubelet[3194]: I1101 00:22:35.605467 3194 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 1 00:22:36.037118 kubelet[3194]: I1101 00:22:35.611155 3194 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Nov 1 00:22:36.037118 kubelet[3194]: I1101 00:22:35.611175 3194 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 1 00:22:36.037118 kubelet[3194]: I1101 00:22:35.611200 3194 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Nov 1 00:22:36.037118 kubelet[3194]: I1101 00:22:35.611213 3194 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Nov 1 00:22:36.037118 kubelet[3194]: I1101 00:22:35.611680 3194 server.go:956] "Client rotation is on, will bootstrap in background" Nov 1 00:22:36.037118 kubelet[3194]: I1101 00:22:35.614313 3194 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 1 00:22:36.037118 kubelet[3194]: I1101 00:22:35.616216 3194 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:22:36.037118 kubelet[3194]: E1101 00:22:35.618771 3194 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 1 00:22:36.037118 kubelet[3194]: I1101 00:22:35.618806 3194 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Nov 1 00:22:36.037118 kubelet[3194]: I1101 00:22:35.621815 3194 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 1 00:22:36.037118 kubelet[3194]: I1101 00:22:35.622013 3194 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:22:36.037757 kubelet[3194]: I1101 00:22:35.622038 3194 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-534d15dd10","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 1 00:22:36.037757 kubelet[3194]: I1101 00:22:35.622500 3194 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 
00:22:36.037757 kubelet[3194]: I1101 00:22:35.622508 3194 container_manager_linux.go:306] "Creating device plugin manager" Nov 1 00:22:36.037757 kubelet[3194]: I1101 00:22:35.622547 3194 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 1 00:22:36.037757 kubelet[3194]: I1101 00:22:35.623372 3194 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:22:36.038093 kubelet[3194]: I1101 00:22:35.623501 3194 kubelet.go:475] "Attempting to sync node with API server" Nov 1 00:22:36.038093 kubelet[3194]: I1101 00:22:35.623516 3194 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:22:36.038093 kubelet[3194]: I1101 00:22:35.623597 3194 kubelet.go:387] "Adding apiserver pod source" Nov 1 00:22:36.038093 kubelet[3194]: I1101 00:22:35.623613 3194 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:22:36.045067 kubelet[3194]: I1101 00:22:36.042494 3194 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 1 00:22:36.045067 kubelet[3194]: I1101 00:22:36.043172 3194 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 1 00:22:36.045067 kubelet[3194]: I1101 00:22:36.043210 3194 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 1 00:22:36.051909 kubelet[3194]: I1101 00:22:36.051471 3194 server.go:1262] "Started kubelet" Nov 1 00:22:36.056514 kubelet[3194]: I1101 00:22:36.056379 3194 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:22:36.056626 kubelet[3194]: I1101 00:22:36.056558 3194 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 1 00:22:36.057020 kubelet[3194]: I1101 00:22:36.056821 3194 server.go:180] 
"Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:22:36.063379 kubelet[3194]: I1101 00:22:36.062756 3194 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:22:36.063379 kubelet[3194]: I1101 00:22:36.062890 3194 server.go:310] "Adding debug handlers to kubelet server" Nov 1 00:22:36.063379 kubelet[3194]: I1101 00:22:36.063236 3194 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:22:36.068814 kubelet[3194]: I1101 00:22:36.068791 3194 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:22:36.072067 kubelet[3194]: I1101 00:22:36.072045 3194 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 1 00:22:36.072300 kubelet[3194]: I1101 00:22:36.072286 3194 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 1 00:22:36.072489 kubelet[3194]: I1101 00:22:36.072476 3194 reconciler.go:29] "Reconciler: start to sync state" Nov 1 00:22:36.074287 kubelet[3194]: E1101 00:22:36.074179 3194 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:22:36.077416 kubelet[3194]: I1101 00:22:36.077397 3194 factory.go:223] Registration of the systemd container factory successfully Nov 1 00:22:36.078032 kubelet[3194]: I1101 00:22:36.077821 3194 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:22:36.081042 kubelet[3194]: I1101 00:22:36.080925 3194 factory.go:223] Registration of the containerd container factory successfully Nov 1 00:22:36.086596 kubelet[3194]: I1101 00:22:36.086568 3194 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Nov 1 00:22:36.088131 kubelet[3194]: I1101 00:22:36.087780 3194 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Nov 1 00:22:36.088131 kubelet[3194]: I1101 00:22:36.087800 3194 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 1 00:22:36.088131 kubelet[3194]: I1101 00:22:36.087825 3194 kubelet.go:2427] "Starting kubelet main sync loop" Nov 1 00:22:36.088131 kubelet[3194]: E1101 00:22:36.087870 3194 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:22:36.150660 kubelet[3194]: I1101 00:22:36.150630 3194 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:22:36.150660 kubelet[3194]: I1101 00:22:36.150653 3194 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:22:36.150847 kubelet[3194]: I1101 00:22:36.150675 3194 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:22:36.150847 kubelet[3194]: I1101 00:22:36.150809 3194 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 1 00:22:36.150847 kubelet[3194]: I1101 00:22:36.150822 3194 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 1 00:22:36.150847 kubelet[3194]: I1101 00:22:36.150841 3194 policy_none.go:49] "None policy: Start" Nov 1 00:22:36.151142 kubelet[3194]: I1101 00:22:36.150852 3194 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 1 00:22:36.151142 kubelet[3194]: I1101 00:22:36.150864 3194 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 1 00:22:36.151142 kubelet[3194]: I1101 00:22:36.151000 3194 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Nov 1 00:22:36.151142 kubelet[3194]: I1101 00:22:36.151013 3194 policy_none.go:47] "Start" Nov 1 00:22:36.159162 kubelet[3194]: E1101 00:22:36.159126 3194 manager.go:513] "Failed to read data from checkpoint" 
err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 1 00:22:36.160555 kubelet[3194]: I1101 00:22:36.159304 3194 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:22:36.160555 kubelet[3194]: I1101 00:22:36.159323 3194 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:22:36.161439 kubelet[3194]: I1101 00:22:36.161076 3194 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:22:36.163498 kubelet[3194]: E1101 00:22:36.163476 3194 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 1 00:22:36.189220 kubelet[3194]: I1101 00:22:36.189147 3194 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-534d15dd10" Nov 1 00:22:36.190308 kubelet[3194]: I1101 00:22:36.190082 3194 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-534d15dd10" Nov 1 00:22:36.190416 kubelet[3194]: I1101 00:22:36.190363 3194 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-534d15dd10" Nov 1 00:22:36.200038 kubelet[3194]: I1101 00:22:36.199872 3194 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 1 00:22:36.204088 kubelet[3194]: I1101 00:22:36.203811 3194 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 1 00:22:36.204521 kubelet[3194]: E1101 00:22:36.204429 3194 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-534d15dd10\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-n-534d15dd10" Nov 1 00:22:36.204521 
kubelet[3194]: I1101 00:22:36.204345 3194 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Nov 1 00:22:36.204521 kubelet[3194]: E1101 00:22:36.204490 3194 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-534d15dd10\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-n-534d15dd10" Nov 1 00:22:36.265630 kubelet[3194]: I1101 00:22:36.265592 3194 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-534d15dd10" Nov 1 00:22:36.274271 kubelet[3194]: I1101 00:22:36.274224 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f35f8d54f7ed8b8be0e1f10d9e6e99ca-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-534d15dd10\" (UID: \"f35f8d54f7ed8b8be0e1f10d9e6e99ca\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-534d15dd10" Nov 1 00:22:36.274271 kubelet[3194]: I1101 00:22:36.274266 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f35f8d54f7ed8b8be0e1f10d9e6e99ca-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-534d15dd10\" (UID: \"f35f8d54f7ed8b8be0e1f10d9e6e99ca\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-534d15dd10" Nov 1 00:22:36.274507 kubelet[3194]: I1101 00:22:36.274293 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/15930ecc3b5819895d44714c6f97ffcd-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-534d15dd10\" (UID: \"15930ecc3b5819895d44714c6f97ffcd\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-534d15dd10" Nov 1 00:22:36.274507 kubelet[3194]: I1101 00:22:36.274318 3194 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/15930ecc3b5819895d44714c6f97ffcd-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-534d15dd10\" (UID: \"15930ecc3b5819895d44714c6f97ffcd\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-534d15dd10" Nov 1 00:22:36.274507 kubelet[3194]: I1101 00:22:36.274339 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/15930ecc3b5819895d44714c6f97ffcd-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-534d15dd10\" (UID: \"15930ecc3b5819895d44714c6f97ffcd\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-534d15dd10" Nov 1 00:22:36.274507 kubelet[3194]: I1101 00:22:36.274358 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f35f8d54f7ed8b8be0e1f10d9e6e99ca-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-534d15dd10\" (UID: \"f35f8d54f7ed8b8be0e1f10d9e6e99ca\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-534d15dd10" Nov 1 00:22:36.274507 kubelet[3194]: I1101 00:22:36.274380 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f35f8d54f7ed8b8be0e1f10d9e6e99ca-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-534d15dd10\" (UID: \"f35f8d54f7ed8b8be0e1f10d9e6e99ca\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-534d15dd10" Nov 1 00:22:36.274752 kubelet[3194]: I1101 00:22:36.274403 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f35f8d54f7ed8b8be0e1f10d9e6e99ca-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-534d15dd10\" (UID: \"f35f8d54f7ed8b8be0e1f10d9e6e99ca\") " 
pod="kube-system/kube-controller-manager-ci-4081.3.6-n-534d15dd10"
Nov 1 00:22:36.274752 kubelet[3194]: I1101 00:22:36.274423 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3638a92b8513c4b564ee4419848f47ee-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-534d15dd10\" (UID: \"3638a92b8513c4b564ee4419848f47ee\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-534d15dd10"
Nov 1 00:22:36.281482 kubelet[3194]: I1101 00:22:36.281099 3194 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.6-n-534d15dd10"
Nov 1 00:22:36.281482 kubelet[3194]: I1101 00:22:36.281177 3194 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-534d15dd10"
Nov 1 00:22:36.624811 kubelet[3194]: I1101 00:22:36.624763 3194 apiserver.go:52] "Watching apiserver"
Nov 1 00:22:36.672855 kubelet[3194]: I1101 00:22:36.672812 3194 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Nov 1 00:22:37.128570 kubelet[3194]: I1101 00:22:37.128402 3194 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-534d15dd10"
Nov 1 00:22:37.128843 kubelet[3194]: I1101 00:22:37.128816 3194 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-534d15dd10"
Nov 1 00:22:37.136260 kubelet[3194]: I1101 00:22:37.136208 3194 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Nov 1 00:22:37.136561 kubelet[3194]: E1101 00:22:37.136402 3194 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-534d15dd10\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-n-534d15dd10"
Nov 1 00:22:37.140521 kubelet[3194]: I1101 00:22:37.140325 3194 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Nov 1 00:22:37.140521 kubelet[3194]: E1101 00:22:37.140371 3194 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-534d15dd10\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-n-534d15dd10"
Nov 1 00:22:37.178745 kubelet[3194]: I1101 00:22:37.178331 3194 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-n-534d15dd10" podStartSLOduration=4.178314408 podStartE2EDuration="4.178314408s" podCreationTimestamp="2025-11-01 00:22:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:22:37.167200167 +0000 UTC m=+1.601320078" watchObservedRunningTime="2025-11-01 00:22:37.178314408 +0000 UTC m=+1.612434319"
Nov 1 00:22:37.178745 kubelet[3194]: I1101 00:22:37.178525 3194 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-534d15dd10" podStartSLOduration=4.178513411 podStartE2EDuration="4.178513411s" podCreationTimestamp="2025-11-01 00:22:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:22:37.178501211 +0000 UTC m=+1.612621122" watchObservedRunningTime="2025-11-01 00:22:37.178513411 +0000 UTC m=+1.612633322"
Nov 1 00:22:37.189386 kubelet[3194]: I1101 00:22:37.189318 3194 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-534d15dd10" podStartSLOduration=1.189301548 podStartE2EDuration="1.189301548s" podCreationTimestamp="2025-11-01 00:22:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:22:37.189155046 +0000 UTC m=+1.623275057" watchObservedRunningTime="2025-11-01 00:22:37.189301548 +0000 UTC m=+1.623421459"
Nov 1 00:22:40.219969 kubelet[3194]: I1101 00:22:40.219928 3194 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Nov 1 00:22:40.221914 containerd[1713]: time="2025-11-01T00:22:40.221874639Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Nov 1 00:22:40.222280 kubelet[3194]: I1101 00:22:40.222111 3194 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Nov 1 00:22:41.290650 systemd[1]: Created slice kubepods-besteffort-pod62b89435_7c73_4338_baf7_0386082b28a2.slice - libcontainer container kubepods-besteffort-pod62b89435_7c73_4338_baf7_0386082b28a2.slice.
Nov 1 00:22:41.404001 kubelet[3194]: I1101 00:22:41.403948 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/62b89435-7c73-4338-baf7-0386082b28a2-xtables-lock\") pod \"kube-proxy-tbjmx\" (UID: \"62b89435-7c73-4338-baf7-0386082b28a2\") " pod="kube-system/kube-proxy-tbjmx"
Nov 1 00:22:41.404460 kubelet[3194]: I1101 00:22:41.404024 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/62b89435-7c73-4338-baf7-0386082b28a2-kube-proxy\") pod \"kube-proxy-tbjmx\" (UID: \"62b89435-7c73-4338-baf7-0386082b28a2\") " pod="kube-system/kube-proxy-tbjmx"
Nov 1 00:22:41.404460 kubelet[3194]: I1101 00:22:41.404063 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/62b89435-7c73-4338-baf7-0386082b28a2-lib-modules\") pod \"kube-proxy-tbjmx\" (UID: \"62b89435-7c73-4338-baf7-0386082b28a2\") " pod="kube-system/kube-proxy-tbjmx"
Nov 1 00:22:41.404460 kubelet[3194]: I1101 00:22:41.404083
3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbr9j\" (UniqueName: \"kubernetes.io/projected/62b89435-7c73-4338-baf7-0386082b28a2-kube-api-access-fbr9j\") pod \"kube-proxy-tbjmx\" (UID: \"62b89435-7c73-4338-baf7-0386082b28a2\") " pod="kube-system/kube-proxy-tbjmx"
Nov 1 00:22:41.495033 systemd[1]: Created slice kubepods-besteffort-podc114a984_89f1_47e8_9d48_1a14cbde9e80.slice - libcontainer container kubepods-besteffort-podc114a984_89f1_47e8_9d48_1a14cbde9e80.slice.
Nov 1 00:22:41.505182 kubelet[3194]: I1101 00:22:41.505142 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmkv2\" (UniqueName: \"kubernetes.io/projected/c114a984-89f1-47e8-9d48-1a14cbde9e80-kube-api-access-cmkv2\") pod \"tigera-operator-65cdcdfd6d-szsrw\" (UID: \"c114a984-89f1-47e8-9d48-1a14cbde9e80\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-szsrw"
Nov 1 00:22:41.505338 kubelet[3194]: I1101 00:22:41.505225 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c114a984-89f1-47e8-9d48-1a14cbde9e80-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-szsrw\" (UID: \"c114a984-89f1-47e8-9d48-1a14cbde9e80\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-szsrw"
Nov 1 00:22:41.605603 containerd[1713]: time="2025-11-01T00:22:41.605455063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tbjmx,Uid:62b89435-7c73-4338-baf7-0386082b28a2,Namespace:kube-system,Attempt:0,}"
Nov 1 00:22:41.660885 containerd[1713]: time="2025-11-01T00:22:41.660782785Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 1 00:22:41.660885 containerd[1713]: time="2025-11-01T00:22:41.660831186Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 1 00:22:41.660885 containerd[1713]: time="2025-11-01T00:22:41.660851886Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 1 00:22:41.661698 containerd[1713]: time="2025-11-01T00:22:41.660946987Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 1 00:22:41.690704 systemd[1]: Started cri-containerd-55255f088176aa5bb28487a79237a5a962e5c07ed1284c53efc82eb66dc5e2d6.scope - libcontainer container 55255f088176aa5bb28487a79237a5a962e5c07ed1284c53efc82eb66dc5e2d6.
Nov 1 00:22:41.714050 containerd[1713]: time="2025-11-01T00:22:41.714002580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tbjmx,Uid:62b89435-7c73-4338-baf7-0386082b28a2,Namespace:kube-system,Attempt:0,} returns sandbox id \"55255f088176aa5bb28487a79237a5a962e5c07ed1284c53efc82eb66dc5e2d6\""
Nov 1 00:22:41.726561 containerd[1713]: time="2025-11-01T00:22:41.725154826Z" level=info msg="CreateContainer within sandbox \"55255f088176aa5bb28487a79237a5a962e5c07ed1284c53efc82eb66dc5e2d6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Nov 1 00:22:43.035240 containerd[1713]: time="2025-11-01T00:22:43.035158732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-szsrw,Uid:c114a984-89f1-47e8-9d48-1a14cbde9e80,Namespace:tigera-operator,Attempt:0,}"
Nov 1 00:22:43.073201 containerd[1713]: time="2025-11-01T00:22:43.073147428Z" level=info msg="CreateContainer within sandbox \"55255f088176aa5bb28487a79237a5a962e5c07ed1284c53efc82eb66dc5e2d6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"612fd39611e1df8c0de74cc8698be5dfb6aa7837c86a05d70d14ed6991399c0c\""
Nov 1 00:22:43.074209 containerd[1713]: time="2025-11-01T00:22:43.073917239Z" level=info msg="StartContainer for \"612fd39611e1df8c0de74cc8698be5dfb6aa7837c86a05d70d14ed6991399c0c\""
Nov 1 00:22:43.117831 systemd[1]: Started cri-containerd-612fd39611e1df8c0de74cc8698be5dfb6aa7837c86a05d70d14ed6991399c0c.scope - libcontainer container 612fd39611e1df8c0de74cc8698be5dfb6aa7837c86a05d70d14ed6991399c0c.
Nov 1 00:22:43.141424 containerd[1713]: time="2025-11-01T00:22:43.141331719Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 1 00:22:43.141424 containerd[1713]: time="2025-11-01T00:22:43.141435120Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 1 00:22:43.141424 containerd[1713]: time="2025-11-01T00:22:43.141476421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 1 00:22:43.142835 containerd[1713]: time="2025-11-01T00:22:43.142120229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 1 00:22:43.174760 systemd[1]: Started cri-containerd-ae4ea64b23c8921a5d8f7832d92c83e9d95300935bc5a3243826b6b57388f2da.scope - libcontainer container ae4ea64b23c8921a5d8f7832d92c83e9d95300935bc5a3243826b6b57388f2da.
Nov 1 00:22:43.183399 containerd[1713]: time="2025-11-01T00:22:43.183336467Z" level=info msg="StartContainer for \"612fd39611e1df8c0de74cc8698be5dfb6aa7837c86a05d70d14ed6991399c0c\" returns successfully"
Nov 1 00:22:43.234751 containerd[1713]: time="2025-11-01T00:22:43.234684038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-szsrw,Uid:c114a984-89f1-47e8-9d48-1a14cbde9e80,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ae4ea64b23c8921a5d8f7832d92c83e9d95300935bc5a3243826b6b57388f2da\""
Nov 1 00:22:43.237319 containerd[1713]: time="2025-11-01T00:22:43.237286172Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Nov 1 00:22:44.170391 kubelet[3194]: I1101 00:22:44.170316 3194 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tbjmx" podStartSLOduration=3.170191254 podStartE2EDuration="3.170191254s" podCreationTimestamp="2025-11-01 00:22:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:22:44.170009552 +0000 UTC m=+8.604129563" watchObservedRunningTime="2025-11-01 00:22:44.170191254 +0000 UTC m=+8.604311165"
Nov 1 00:22:44.771926 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2264826262.mount: Deactivated successfully.
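The pod_startup_latency_tracker entry above reports podStartSLOduration=3.170191254 for kube-proxy-tbjmx. Judging by the logged fields, this value appears to be watchObservedRunningTime minus podCreationTimestamp (an inference from this log, not an authoritative statement of the tracker's algorithm); a minimal arithmetic check using exact decimal seconds-of-day:

```python
from decimal import Decimal

def seconds_of_day(hms: str) -> Decimal:
    # "00:22:44.170191254" -> seconds since midnight, with exact decimal arithmetic
    # (float would lose the nanosecond digits shown in the log)
    h, m, s = hms.split(":")
    return Decimal(h) * 3600 + Decimal(m) * 60 + Decimal(s)

# Values taken from the kube-proxy-tbjmx tracker entry above.
created = seconds_of_day("00:22:41")               # podCreationTimestamp (second granularity)
observed = seconds_of_day("00:22:44.170191254")    # watchObservedRunningTime
print(observed - created)  # 3.170191254, matching podStartSLOduration
```

The per-second truncation of podCreationTimestamp explains why the duration carries the full fractional part of the observed time.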
Nov 1 00:22:45.442881 containerd[1713]: time="2025-11-01T00:22:45.442831673Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:22:45.450870 containerd[1713]: time="2025-11-01T00:22:45.450681475Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691"
Nov 1 00:22:45.454894 containerd[1713]: time="2025-11-01T00:22:45.453718415Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:22:45.458928 containerd[1713]: time="2025-11-01T00:22:45.458080872Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:22:45.458928 containerd[1713]: time="2025-11-01T00:22:45.458783581Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.221453809s"
Nov 1 00:22:45.458928 containerd[1713]: time="2025-11-01T00:22:45.458820382Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\""
Nov 1 00:22:45.465700 containerd[1713]: time="2025-11-01T00:22:45.465666671Z" level=info msg="CreateContainer within sandbox \"ae4ea64b23c8921a5d8f7832d92c83e9d95300935bc5a3243826b6b57388f2da\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Nov 1 00:22:45.499820 containerd[1713]: time="2025-11-01T00:22:45.499779716Z" level=info msg="CreateContainer within sandbox \"ae4ea64b23c8921a5d8f7832d92c83e9d95300935bc5a3243826b6b57388f2da\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"0cadc879164a38d20930b67d4565054654cdcad463250b4a015b7fffaa799ff1\""
Nov 1 00:22:45.500797 containerd[1713]: time="2025-11-01T00:22:45.500349824Z" level=info msg="StartContainer for \"0cadc879164a38d20930b67d4565054654cdcad463250b4a015b7fffaa799ff1\""
Nov 1 00:22:45.534624 systemd[1]: run-containerd-runc-k8s.io-0cadc879164a38d20930b67d4565054654cdcad463250b4a015b7fffaa799ff1-runc.qr9I5b.mount: Deactivated successfully.
Nov 1 00:22:45.545710 systemd[1]: Started cri-containerd-0cadc879164a38d20930b67d4565054654cdcad463250b4a015b7fffaa799ff1.scope - libcontainer container 0cadc879164a38d20930b67d4565054654cdcad463250b4a015b7fffaa799ff1.
Nov 1 00:22:45.572684 containerd[1713]: time="2025-11-01T00:22:45.572638168Z" level=info msg="StartContainer for \"0cadc879164a38d20930b67d4565054654cdcad463250b4a015b7fffaa799ff1\" returns successfully"
Nov 1 00:22:46.177562 kubelet[3194]: I1101 00:22:46.175919 3194 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-szsrw" podStartSLOduration=2.952033505 podStartE2EDuration="5.175523841s" podCreationTimestamp="2025-11-01 00:22:41 +0000 UTC" firstStartedPulling="2025-11-01 00:22:43.236482461 +0000 UTC m=+7.670602372" lastFinishedPulling="2025-11-01 00:22:45.459972797 +0000 UTC m=+9.894092708" observedRunningTime="2025-11-01 00:22:46.174776931 +0000 UTC m=+10.608896842" watchObservedRunningTime="2025-11-01 00:22:46.175523841 +0000 UTC m=+10.609643752"
Nov 1 00:22:51.899157 sudo[2248]: pam_unix(sudo:session): session closed for user root
Nov 1 00:22:52.004803 sshd[2245]: pam_unix(sshd:session): session closed for user core
Nov 1 00:22:52.008966 systemd[1]: sshd@6-10.200.8.40:22-10.200.16.10:42042.service: Deactivated successfully.
Nov 1 00:22:52.013028 systemd[1]: session-9.scope: Deactivated successfully.
Nov 1 00:22:52.013556 systemd[1]: session-9.scope: Consumed 4.931s CPU time, 160.4M memory peak, 0B memory swap peak.
Nov 1 00:22:52.015806 systemd-logind[1692]: Session 9 logged out. Waiting for processes to exit.
Nov 1 00:22:52.017403 systemd-logind[1692]: Removed session 9.
Nov 1 00:22:57.685100 systemd[1]: Created slice kubepods-besteffort-pod2fe3541e_1496_4036_a569_3b6e71f1300e.slice - libcontainer container kubepods-besteffort-pod2fe3541e_1496_4036_a569_3b6e71f1300e.slice.
Nov 1 00:22:57.704222 kubelet[3194]: I1101 00:22:57.704180 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2fe3541e-1496-4036-a569-3b6e71f1300e-typha-certs\") pod \"calico-typha-5c9bfb7dcc-9vztk\" (UID: \"2fe3541e-1496-4036-a569-3b6e71f1300e\") " pod="calico-system/calico-typha-5c9bfb7dcc-9vztk"
Nov 1 00:22:57.704982 kubelet[3194]: I1101 00:22:57.704835 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2fe3541e-1496-4036-a569-3b6e71f1300e-tigera-ca-bundle\") pod \"calico-typha-5c9bfb7dcc-9vztk\" (UID: \"2fe3541e-1496-4036-a569-3b6e71f1300e\") " pod="calico-system/calico-typha-5c9bfb7dcc-9vztk"
Nov 1 00:22:57.704982 kubelet[3194]: I1101 00:22:57.704922 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvk4z\" (UniqueName: \"kubernetes.io/projected/2fe3541e-1496-4036-a569-3b6e71f1300e-kube-api-access-cvk4z\") pod \"calico-typha-5c9bfb7dcc-9vztk\" (UID: \"2fe3541e-1496-4036-a569-3b6e71f1300e\") " pod="calico-system/calico-typha-5c9bfb7dcc-9vztk"
Nov 1 00:22:57.858841 systemd[1]: Created slice kubepods-besteffort-pod76f4c869_c63f_4749_b17a_17d16186776c.slice - libcontainer container kubepods-besteffort-pod76f4c869_c63f_4749_b17a_17d16186776c.slice.
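The `Created slice` entries above show the kubelet's systemd cgroup driver creating one slice per pod: the unit name is `kubepods-`, the QoS class, `pod`, and the pod UID with its dashes escaped to underscores, since `-` is systemd's slice hierarchy separator. A small sketch of that mapping (the helper name is mine, not kubelet code), using the UID from the calico-node slice entry above:

```python
def pod_slice_name(qos_class: str, pod_uid: str) -> str:
    # systemd interprets "-" in a slice unit name as a hierarchy separator,
    # so the kubelet escapes the UID's dashes to underscores.
    return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

print(pod_slice_name("besteffort", "76f4c869-c63f-4749-b17a-17d16186776c"))
# kubepods-besteffort-pod76f4c869_c63f_4749_b17a_17d16186776c.slice
```

This is why the UID appears dashed in the kubelet's volume entries but underscored in the systemd entries for the same pod.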
Nov 1 00:22:57.906254 kubelet[3194]: I1101 00:22:57.906213 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/76f4c869-c63f-4749-b17a-17d16186776c-xtables-lock\") pod \"calico-node-jg4jd\" (UID: \"76f4c869-c63f-4749-b17a-17d16186776c\") " pod="calico-system/calico-node-jg4jd"
Nov 1 00:22:57.906254 kubelet[3194]: I1101 00:22:57.906254 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/76f4c869-c63f-4749-b17a-17d16186776c-cni-log-dir\") pod \"calico-node-jg4jd\" (UID: \"76f4c869-c63f-4749-b17a-17d16186776c\") " pod="calico-system/calico-node-jg4jd"
Nov 1 00:22:57.906473 kubelet[3194]: I1101 00:22:57.906275 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/76f4c869-c63f-4749-b17a-17d16186776c-flexvol-driver-host\") pod \"calico-node-jg4jd\" (UID: \"76f4c869-c63f-4749-b17a-17d16186776c\") " pod="calico-system/calico-node-jg4jd"
Nov 1 00:22:57.906473 kubelet[3194]: I1101 00:22:57.906302 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/76f4c869-c63f-4749-b17a-17d16186776c-var-lib-calico\") pod \"calico-node-jg4jd\" (UID: \"76f4c869-c63f-4749-b17a-17d16186776c\") " pod="calico-system/calico-node-jg4jd"
Nov 1 00:22:57.906473 kubelet[3194]: I1101 00:22:57.906323 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/76f4c869-c63f-4749-b17a-17d16186776c-var-run-calico\") pod \"calico-node-jg4jd\" (UID: \"76f4c869-c63f-4749-b17a-17d16186776c\") " pod="calico-system/calico-node-jg4jd"
Nov 1 00:22:57.906473 kubelet[3194]: I1101 00:22:57.906343 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/76f4c869-c63f-4749-b17a-17d16186776c-cni-bin-dir\") pod \"calico-node-jg4jd\" (UID: \"76f4c869-c63f-4749-b17a-17d16186776c\") " pod="calico-system/calico-node-jg4jd"
Nov 1 00:22:57.906473 kubelet[3194]: I1101 00:22:57.906360 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/76f4c869-c63f-4749-b17a-17d16186776c-node-certs\") pod \"calico-node-jg4jd\" (UID: \"76f4c869-c63f-4749-b17a-17d16186776c\") " pod="calico-system/calico-node-jg4jd"
Nov 1 00:22:57.906720 kubelet[3194]: I1101 00:22:57.906378 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/76f4c869-c63f-4749-b17a-17d16186776c-policysync\") pod \"calico-node-jg4jd\" (UID: \"76f4c869-c63f-4749-b17a-17d16186776c\") " pod="calico-system/calico-node-jg4jd"
Nov 1 00:22:57.906720 kubelet[3194]: I1101 00:22:57.906408 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/76f4c869-c63f-4749-b17a-17d16186776c-lib-modules\") pod \"calico-node-jg4jd\" (UID: \"76f4c869-c63f-4749-b17a-17d16186776c\") " pod="calico-system/calico-node-jg4jd"
Nov 1 00:22:57.906720 kubelet[3194]: I1101 00:22:57.906427 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76f4c869-c63f-4749-b17a-17d16186776c-tigera-ca-bundle\") pod \"calico-node-jg4jd\" (UID: \"76f4c869-c63f-4749-b17a-17d16186776c\") " pod="calico-system/calico-node-jg4jd"
Nov 1 00:22:57.906720 kubelet[3194]: I1101 00:22:57.906452 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/76f4c869-c63f-4749-b17a-17d16186776c-cni-net-dir\") pod \"calico-node-jg4jd\" (UID: \"76f4c869-c63f-4749-b17a-17d16186776c\") " pod="calico-system/calico-node-jg4jd"
Nov 1 00:22:57.906720 kubelet[3194]: I1101 00:22:57.906472 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kltdv\" (UniqueName: \"kubernetes.io/projected/76f4c869-c63f-4749-b17a-17d16186776c-kube-api-access-kltdv\") pod \"calico-node-jg4jd\" (UID: \"76f4c869-c63f-4749-b17a-17d16186776c\") " pod="calico-system/calico-node-jg4jd"
Nov 1 00:22:57.998820 containerd[1713]: time="2025-11-01T00:22:57.998453769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c9bfb7dcc-9vztk,Uid:2fe3541e-1496-4036-a569-3b6e71f1300e,Namespace:calico-system,Attempt:0,}"
Nov 1 00:22:58.009214 kubelet[3194]: E1101 00:22:58.008748 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:22:58.009214 kubelet[3194]: W1101 00:22:58.008774 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:22:58.009214 kubelet[3194]: E1101 00:22:58.008798 3194 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the preceding three-line FlexVolume probe failure repeats, with only timestamps changing, a further 21 times between 00:22:58.009 and 00:22:58.028; the final occurrence follows]
Nov 1 00:22:58.028267 kubelet[3194]: E1101 00:22:58.028255 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 1 00:22:58.028342 kubelet[3194]: W1101 00:22:58.028330 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 1 00:22:58.028406 kubelet[3194]: E1101 00:22:58.028396 3194 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Nov 1 00:22:58.028739 kubelet[3194]: E1101 00:22:58.028724 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:58.028832 kubelet[3194]: W1101 00:22:58.028818 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:58.029007 kubelet[3194]: E1101 00:22:58.028902 3194 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:58.030799 kubelet[3194]: E1101 00:22:58.030779 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:58.030799 kubelet[3194]: W1101 00:22:58.030798 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:58.030954 kubelet[3194]: E1101 00:22:58.030813 3194 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:58.031048 kubelet[3194]: E1101 00:22:58.031031 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:58.031099 kubelet[3194]: W1101 00:22:58.031048 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:58.031099 kubelet[3194]: E1101 00:22:58.031062 3194 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:58.053709 kubelet[3194]: E1101 00:22:58.053660 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-trnvf" podUID="763cf2c8-d06c-456e-8d46-4720620695a1" Nov 1 00:22:58.070951 kubelet[3194]: E1101 00:22:58.068926 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:58.070951 kubelet[3194]: W1101 00:22:58.068956 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:58.070951 kubelet[3194]: E1101 00:22:58.068984 3194 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:58.080596 containerd[1713]: time="2025-11-01T00:22:58.080293487Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:58.080596 containerd[1713]: time="2025-11-01T00:22:58.080354488Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:58.080596 containerd[1713]: time="2025-11-01T00:22:58.080373088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:58.080596 containerd[1713]: time="2025-11-01T00:22:58.080462289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:58.108206 kubelet[3194]: E1101 00:22:58.107103 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:58.108206 kubelet[3194]: W1101 00:22:58.107132 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:58.108206 kubelet[3194]: E1101 00:22:58.107162 3194 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:58.108206 kubelet[3194]: E1101 00:22:58.107743 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:58.108206 kubelet[3194]: W1101 00:22:58.107760 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:58.108206 kubelet[3194]: E1101 00:22:58.107779 3194 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:58.109354 kubelet[3194]: E1101 00:22:58.109331 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:58.109354 kubelet[3194]: W1101 00:22:58.109353 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:58.109667 kubelet[3194]: E1101 00:22:58.109391 3194 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:58.109793 kubelet[3194]: E1101 00:22:58.109699 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:58.109793 kubelet[3194]: W1101 00:22:58.109712 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:58.109793 kubelet[3194]: E1101 00:22:58.109728 3194 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:58.112020 kubelet[3194]: E1101 00:22:58.110252 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:58.112020 kubelet[3194]: W1101 00:22:58.110269 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:58.112020 kubelet[3194]: E1101 00:22:58.110286 3194 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:58.112020 kubelet[3194]: E1101 00:22:58.111768 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:58.112020 kubelet[3194]: W1101 00:22:58.111780 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:58.112020 kubelet[3194]: E1101 00:22:58.111796 3194 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:58.112020 kubelet[3194]: E1101 00:22:58.112010 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:58.112020 kubelet[3194]: W1101 00:22:58.112021 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:58.116570 kubelet[3194]: E1101 00:22:58.112033 3194 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:58.116570 kubelet[3194]: E1101 00:22:58.112241 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:58.116570 kubelet[3194]: W1101 00:22:58.112252 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:58.116570 kubelet[3194]: E1101 00:22:58.112264 3194 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:58.116570 kubelet[3194]: E1101 00:22:58.112496 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:58.116570 kubelet[3194]: W1101 00:22:58.112506 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:58.116570 kubelet[3194]: E1101 00:22:58.112518 3194 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:58.116570 kubelet[3194]: E1101 00:22:58.113050 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:58.116570 kubelet[3194]: W1101 00:22:58.113062 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:58.116570 kubelet[3194]: E1101 00:22:58.113074 3194 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:58.117687 kubelet[3194]: E1101 00:22:58.113677 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:58.117687 kubelet[3194]: W1101 00:22:58.113691 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:58.117687 kubelet[3194]: E1101 00:22:58.113705 3194 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:58.117687 kubelet[3194]: E1101 00:22:58.114022 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:58.117687 kubelet[3194]: W1101 00:22:58.114034 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:58.117687 kubelet[3194]: E1101 00:22:58.114047 3194 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:58.117687 kubelet[3194]: E1101 00:22:58.114529 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:58.117687 kubelet[3194]: W1101 00:22:58.114560 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:58.117687 kubelet[3194]: E1101 00:22:58.114573 3194 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:58.117687 kubelet[3194]: E1101 00:22:58.116588 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:58.118348 kubelet[3194]: W1101 00:22:58.116601 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:58.118348 kubelet[3194]: E1101 00:22:58.116615 3194 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:58.118348 kubelet[3194]: E1101 00:22:58.117840 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:58.118348 kubelet[3194]: W1101 00:22:58.117852 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:58.118348 kubelet[3194]: E1101 00:22:58.117866 3194 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:58.120407 kubelet[3194]: E1101 00:22:58.120387 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:58.120407 kubelet[3194]: W1101 00:22:58.120406 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:58.121621 kubelet[3194]: E1101 00:22:58.120421 3194 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:58.121621 kubelet[3194]: E1101 00:22:58.121360 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:58.121621 kubelet[3194]: W1101 00:22:58.121372 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:58.121621 kubelet[3194]: E1101 00:22:58.121388 3194 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:58.120745 systemd[1]: Started cri-containerd-c2d21d8c23889f5ed1d19469d8784151249ccd759c91774547bd2b8e6249fc2c.scope - libcontainer container c2d21d8c23889f5ed1d19469d8784151249ccd759c91774547bd2b8e6249fc2c. Nov 1 00:22:58.123945 kubelet[3194]: E1101 00:22:58.122325 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:58.123945 kubelet[3194]: W1101 00:22:58.122341 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:58.123945 kubelet[3194]: E1101 00:22:58.122354 3194 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:58.123945 kubelet[3194]: E1101 00:22:58.123093 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:58.123945 kubelet[3194]: W1101 00:22:58.123105 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:58.123945 kubelet[3194]: E1101 00:22:58.123148 3194 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:58.123945 kubelet[3194]: E1101 00:22:58.123361 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:58.123945 kubelet[3194]: W1101 00:22:58.123371 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:58.123945 kubelet[3194]: E1101 00:22:58.123383 3194 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:58.124381 kubelet[3194]: E1101 00:22:58.123957 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:58.124381 kubelet[3194]: W1101 00:22:58.123970 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:58.124381 kubelet[3194]: E1101 00:22:58.123985 3194 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:58.124381 kubelet[3194]: I1101 00:22:58.124015 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/763cf2c8-d06c-456e-8d46-4720620695a1-socket-dir\") pod \"csi-node-driver-trnvf\" (UID: \"763cf2c8-d06c-456e-8d46-4720620695a1\") " pod="calico-system/csi-node-driver-trnvf" Nov 1 00:22:58.125686 kubelet[3194]: E1101 00:22:58.124686 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:58.125686 kubelet[3194]: W1101 00:22:58.124701 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:58.125686 kubelet[3194]: E1101 00:22:58.124819 3194 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:58.125686 kubelet[3194]: I1101 00:22:58.124857 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/763cf2c8-d06c-456e-8d46-4720620695a1-varrun\") pod \"csi-node-driver-trnvf\" (UID: \"763cf2c8-d06c-456e-8d46-4720620695a1\") " pod="calico-system/csi-node-driver-trnvf" Nov 1 00:22:58.125686 kubelet[3194]: E1101 00:22:58.125479 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:58.125686 kubelet[3194]: W1101 00:22:58.125492 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:58.125686 kubelet[3194]: E1101 00:22:58.125506 3194 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:58.126731 kubelet[3194]: E1101 00:22:58.126048 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:58.126731 kubelet[3194]: W1101 00:22:58.126060 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:58.126731 kubelet[3194]: E1101 00:22:58.126328 3194 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:58.126731 kubelet[3194]: E1101 00:22:58.126696 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:58.126731 kubelet[3194]: W1101 00:22:58.126708 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:58.126731 kubelet[3194]: E1101 00:22:58.126721 3194 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:58.127404 kubelet[3194]: I1101 00:22:58.126971 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/763cf2c8-d06c-456e-8d46-4720620695a1-kubelet-dir\") pod \"csi-node-driver-trnvf\" (UID: \"763cf2c8-d06c-456e-8d46-4720620695a1\") " pod="calico-system/csi-node-driver-trnvf" Nov 1 00:22:58.127611 kubelet[3194]: E1101 00:22:58.127598 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:58.127666 kubelet[3194]: W1101 00:22:58.127614 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:58.127666 kubelet[3194]: E1101 00:22:58.127628 3194 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:58.127871 kubelet[3194]: I1101 00:22:58.127838 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/763cf2c8-d06c-456e-8d46-4720620695a1-registration-dir\") pod \"csi-node-driver-trnvf\" (UID: \"763cf2c8-d06c-456e-8d46-4720620695a1\") " pod="calico-system/csi-node-driver-trnvf" Nov 1 00:22:58.128265 kubelet[3194]: E1101 00:22:58.128212 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:58.128265 kubelet[3194]: W1101 00:22:58.128250 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:58.128265 kubelet[3194]: E1101 00:22:58.128265 3194 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:58.128928 kubelet[3194]: E1101 00:22:58.128900 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:58.128928 kubelet[3194]: W1101 00:22:58.128919 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:58.129044 kubelet[3194]: E1101 00:22:58.128933 3194 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:58.129650 kubelet[3194]: E1101 00:22:58.129620 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:58.129650 kubelet[3194]: W1101 00:22:58.129637 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:58.129770 kubelet[3194]: E1101 00:22:58.129667 3194 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:58.129770 kubelet[3194]: I1101 00:22:58.129717 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9pnx\" (UniqueName: \"kubernetes.io/projected/763cf2c8-d06c-456e-8d46-4720620695a1-kube-api-access-f9pnx\") pod \"csi-node-driver-trnvf\" (UID: \"763cf2c8-d06c-456e-8d46-4720620695a1\") " pod="calico-system/csi-node-driver-trnvf" Nov 1 00:22:58.130211 kubelet[3194]: E1101 00:22:58.130101 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:58.130211 kubelet[3194]: W1101 00:22:58.130115 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:58.130211 kubelet[3194]: E1101 00:22:58.130128 3194 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:58.130706 kubelet[3194]: E1101 00:22:58.130684 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:58.130706 kubelet[3194]: W1101 00:22:58.130703 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:58.130809 kubelet[3194]: E1101 00:22:58.130720 3194 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:58.131507 kubelet[3194]: E1101 00:22:58.131199 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:58.131507 kubelet[3194]: W1101 00:22:58.131216 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:58.131507 kubelet[3194]: E1101 00:22:58.131229 3194 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:58.132008 kubelet[3194]: E1101 00:22:58.131985 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:58.132008 kubelet[3194]: W1101 00:22:58.132005 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:58.132117 kubelet[3194]: E1101 00:22:58.132019 3194 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:58.132558 kubelet[3194]: E1101 00:22:58.132301 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:58.132558 kubelet[3194]: W1101 00:22:58.132315 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:58.132558 kubelet[3194]: E1101 00:22:58.132364 3194 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:58.132821 kubelet[3194]: E1101 00:22:58.132716 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:58.132821 kubelet[3194]: W1101 00:22:58.132728 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:58.132821 kubelet[3194]: E1101 00:22:58.132742 3194 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:58.172843 containerd[1713]: time="2025-11-01T00:22:58.172783338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jg4jd,Uid:76f4c869-c63f-4749-b17a-17d16186776c,Namespace:calico-system,Attempt:0,}" Nov 1 00:22:58.201105 containerd[1713]: time="2025-11-01T00:22:58.201062889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c9bfb7dcc-9vztk,Uid:2fe3541e-1496-4036-a569-3b6e71f1300e,Namespace:calico-system,Attempt:0,} returns sandbox id \"c2d21d8c23889f5ed1d19469d8784151249ccd759c91774547bd2b8e6249fc2c\"" Nov 1 00:22:58.202794 containerd[1713]: time="2025-11-01T00:22:58.202748510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 1 00:22:58.232060 containerd[1713]: time="2025-11-01T00:22:58.231842372Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:22:58.232060 containerd[1713]: time="2025-11-01T00:22:58.231984474Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:22:58.232411 containerd[1713]: time="2025-11-01T00:22:58.232170276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:22:58.232411 containerd[1713]: time="2025-11-01T00:22:58.232340379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 1 00:22:58.271267 systemd[1]: Started cri-containerd-6d0b6ba5f5f55505071bf7ab03b7b5da45fbace6c2d309fc2e4e32b0770e7153.scope - libcontainer container 6d0b6ba5f5f55505071bf7ab03b7b5da45fbace6c2d309fc2e4e32b0770e7153.
Nov 1 00:22:58.306356 containerd[1713]: time="2025-11-01T00:22:58.305792892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-jg4jd,Uid:76f4c869-c63f-4749-b17a-17d16186776c,Namespace:calico-system,Attempt:0,} returns sandbox id \"6d0b6ba5f5f55505071bf7ab03b7b5da45fbace6c2d309fc2e4e32b0770e7153\""
Nov 1 00:22:59.489228 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount5645072.mount: Deactivated successfully.
Nov 1 00:23:00.090132 kubelet[3194]: E1101 00:23:00.089590 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-trnvf" podUID="763cf2c8-d06c-456e-8d46-4720620695a1"
Nov 1 00:23:00.743638 containerd[1713]: time="2025-11-01T00:23:00.743584520Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:23:00.746425 containerd[1713]: time="2025-11-01T00:23:00.746367254Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Nov 1 00:23:00.748875 containerd[1713]: time="2025-11-01T00:23:00.748825485Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:23:00.753215 containerd[1713]: time="2025-11-01T00:23:00.753169139Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 1 00:23:00.753921 containerd[1713]: time="2025-11-01T00:23:00.753771046Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id
\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.550982235s" Nov 1 00:23:00.753921 containerd[1713]: time="2025-11-01T00:23:00.753810747Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 1 00:23:00.756902 containerd[1713]: time="2025-11-01T00:23:00.756866985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 1 00:23:00.776959 containerd[1713]: time="2025-11-01T00:23:00.776896934Z" level=info msg="CreateContainer within sandbox \"c2d21d8c23889f5ed1d19469d8784151249ccd759c91774547bd2b8e6249fc2c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 1 00:23:00.825511 containerd[1713]: time="2025-11-01T00:23:00.825451938Z" level=info msg="CreateContainer within sandbox \"c2d21d8c23889f5ed1d19469d8784151249ccd759c91774547bd2b8e6249fc2c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f8a11469e63a8b611a99452473efd35bd298a1f0d98756e069f517a3dcb1907f\"" Nov 1 00:23:00.826149 containerd[1713]: time="2025-11-01T00:23:00.826097746Z" level=info msg="StartContainer for \"f8a11469e63a8b611a99452473efd35bd298a1f0d98756e069f517a3dcb1907f\"" Nov 1 00:23:00.865681 systemd[1]: Started cri-containerd-f8a11469e63a8b611a99452473efd35bd298a1f0d98756e069f517a3dcb1907f.scope - libcontainer container f8a11469e63a8b611a99452473efd35bd298a1f0d98756e069f517a3dcb1907f. 
Nov 1 00:23:00.916384 containerd[1713]: time="2025-11-01T00:23:00.916342369Z" level=info msg="StartContainer for \"f8a11469e63a8b611a99452473efd35bd298a1f0d98756e069f517a3dcb1907f\" returns successfully" Nov 1 00:23:01.247764 kubelet[3194]: E1101 00:23:01.247599 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:01.247764 kubelet[3194]: W1101 00:23:01.247625 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:01.247764 kubelet[3194]: E1101 00:23:01.247650 3194 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:01.249123 kubelet[3194]: E1101 00:23:01.248431 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:01.249123 kubelet[3194]: W1101 00:23:01.248447 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:01.249123 kubelet[3194]: E1101 00:23:01.248464 3194 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:01.249123 kubelet[3194]: E1101 00:23:01.248790 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:01.249123 kubelet[3194]: W1101 00:23:01.248803 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:01.249123 kubelet[3194]: E1101 00:23:01.248817 3194 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:01.250849 kubelet[3194]: E1101 00:23:01.250707 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:01.250849 kubelet[3194]: W1101 00:23:01.250722 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:01.250849 kubelet[3194]: E1101 00:23:01.250735 3194 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:01.251278 kubelet[3194]: E1101 00:23:01.251015 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:01.251278 kubelet[3194]: W1101 00:23:01.251027 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:01.251278 kubelet[3194]: E1101 00:23:01.251040 3194 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:01.251563 kubelet[3194]: E1101 00:23:01.251429 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:01.251563 kubelet[3194]: W1101 00:23:01.251443 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:01.251563 kubelet[3194]: E1101 00:23:01.251456 3194 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:01.251982 kubelet[3194]: E1101 00:23:01.251861 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:01.251982 kubelet[3194]: W1101 00:23:01.251875 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:01.251982 kubelet[3194]: E1101 00:23:01.251888 3194 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:01.252338 kubelet[3194]: E1101 00:23:01.252205 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:01.252338 kubelet[3194]: W1101 00:23:01.252218 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:01.252338 kubelet[3194]: E1101 00:23:01.252231 3194 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:01.252680 kubelet[3194]: E1101 00:23:01.252604 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:01.252680 kubelet[3194]: W1101 00:23:01.252619 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:01.252680 kubelet[3194]: E1101 00:23:01.252634 3194 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:01.253201 kubelet[3194]: E1101 00:23:01.253011 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:01.253201 kubelet[3194]: W1101 00:23:01.253024 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:01.253201 kubelet[3194]: E1101 00:23:01.253036 3194 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:01.253484 kubelet[3194]: E1101 00:23:01.253408 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:01.253484 kubelet[3194]: W1101 00:23:01.253421 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:01.253484 kubelet[3194]: E1101 00:23:01.253434 3194 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:01.255946 kubelet[3194]: E1101 00:23:01.255742 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:01.255946 kubelet[3194]: W1101 00:23:01.255757 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:01.255946 kubelet[3194]: E1101 00:23:01.255770 3194 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:01.256287 kubelet[3194]: E1101 00:23:01.256161 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:01.256287 kubelet[3194]: W1101 00:23:01.256174 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:01.256287 kubelet[3194]: E1101 00:23:01.256186 3194 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:01.256636 kubelet[3194]: E1101 00:23:01.256508 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:01.256636 kubelet[3194]: W1101 00:23:01.256521 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:01.256636 kubelet[3194]: E1101 00:23:01.256579 3194 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:01.273000 kubelet[3194]: E1101 00:23:01.272985 3194 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:01.273089 kubelet[3194]: W1101 00:23:01.273076 3194 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:01.273161 kubelet[3194]: E1101 00:23:01.273149 3194 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:01.951888 containerd[1713]: time="2025-11-01T00:23:01.951838651Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:23:01.955152 containerd[1713]: time="2025-11-01T00:23:01.955012390Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 1 00:23:01.959324 containerd[1713]: time="2025-11-01T00:23:01.958310231Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:23:01.963406 containerd[1713]: time="2025-11-01T00:23:01.962588285Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:23:01.963406 containerd[1713]: time="2025-11-01T00:23:01.963240193Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.206130605s" Nov 1 00:23:01.963406 containerd[1713]: time="2025-11-01T00:23:01.963276993Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 1 00:23:01.970658 containerd[1713]: time="2025-11-01T00:23:01.970625785Z" level=info msg="CreateContainer within sandbox \"6d0b6ba5f5f55505071bf7ab03b7b5da45fbace6c2d309fc2e4e32b0770e7153\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 1 00:23:02.010851 containerd[1713]: time="2025-11-01T00:23:02.010803484Z" level=info msg="CreateContainer within sandbox \"6d0b6ba5f5f55505071bf7ab03b7b5da45fbace6c2d309fc2e4e32b0770e7153\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7c8727f169bc7af8f17acbbbba118c79a2ef07f76116da60485b56c63362e628\"" Nov 1 00:23:02.013250 containerd[1713]: time="2025-11-01T00:23:02.011661395Z" level=info msg="StartContainer for \"7c8727f169bc7af8f17acbbbba118c79a2ef07f76116da60485b56c63362e628\"" Nov 1 00:23:02.052731 systemd[1]: Started cri-containerd-7c8727f169bc7af8f17acbbbba118c79a2ef07f76116da60485b56c63362e628.scope - libcontainer container 7c8727f169bc7af8f17acbbbba118c79a2ef07f76116da60485b56c63362e628. 
Nov 1 00:23:02.088614 kubelet[3194]: E1101 00:23:02.088209 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-trnvf" podUID="763cf2c8-d06c-456e-8d46-4720620695a1" Nov 1 00:23:02.091563 containerd[1713]: time="2025-11-01T00:23:02.089630765Z" level=info msg="StartContainer for \"7c8727f169bc7af8f17acbbbba118c79a2ef07f76116da60485b56c63362e628\" returns successfully" Nov 1 00:23:02.104737 systemd[1]: cri-containerd-7c8727f169bc7af8f17acbbbba118c79a2ef07f76116da60485b56c63362e628.scope: Deactivated successfully. Nov 1 00:23:02.128620 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c8727f169bc7af8f17acbbbba118c79a2ef07f76116da60485b56c63362e628-rootfs.mount: Deactivated successfully. Nov 1 00:23:02.204303 kubelet[3194]: I1101 00:23:02.204195 3194 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:23:02.223241 kubelet[3194]: I1101 00:23:02.222279 3194 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5c9bfb7dcc-9vztk" podStartSLOduration=2.66970496 podStartE2EDuration="5.222259215s" podCreationTimestamp="2025-11-01 00:22:57 +0000 UTC" firstStartedPulling="2025-11-01 00:22:58.202316305 +0000 UTC m=+22.636436316" lastFinishedPulling="2025-11-01 00:23:00.75487066 +0000 UTC m=+25.188990571" observedRunningTime="2025-11-01 00:23:01.259107133 +0000 UTC m=+25.693227044" watchObservedRunningTime="2025-11-01 00:23:02.222259215 +0000 UTC m=+26.656379126" Nov 1 00:23:03.602556 containerd[1713]: time="2025-11-01T00:23:03.602455785Z" level=info msg="shim disconnected" id=7c8727f169bc7af8f17acbbbba118c79a2ef07f76116da60485b56c63362e628 namespace=k8s.io Nov 1 00:23:03.602556 containerd[1713]: time="2025-11-01T00:23:03.602546586Z" level=warning msg="cleaning up after shim 
disconnected" id=7c8727f169bc7af8f17acbbbba118c79a2ef07f76116da60485b56c63362e628 namespace=k8s.io Nov 1 00:23:03.602556 containerd[1713]: time="2025-11-01T00:23:03.602560287Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 1 00:23:04.091033 kubelet[3194]: E1101 00:23:04.090991 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-trnvf" podUID="763cf2c8-d06c-456e-8d46-4720620695a1" Nov 1 00:23:04.210323 containerd[1713]: time="2025-11-01T00:23:04.210273647Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 1 00:23:06.090719 kubelet[3194]: E1101 00:23:06.090679 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-trnvf" podUID="763cf2c8-d06c-456e-8d46-4720620695a1" Nov 1 00:23:07.407756 containerd[1713]: time="2025-11-01T00:23:07.407701830Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:23:07.410791 containerd[1713]: time="2025-11-01T00:23:07.410725667Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 1 00:23:07.414770 containerd[1713]: time="2025-11-01T00:23:07.413901006Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:23:07.418148 containerd[1713]: time="2025-11-01T00:23:07.418115257Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:23:07.418876 containerd[1713]: time="2025-11-01T00:23:07.418841166Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.208523719s" Nov 1 00:23:07.418983 containerd[1713]: time="2025-11-01T00:23:07.418874167Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 1 00:23:07.431659 containerd[1713]: time="2025-11-01T00:23:07.431626922Z" level=info msg="CreateContainer within sandbox \"6d0b6ba5f5f55505071bf7ab03b7b5da45fbace6c2d309fc2e4e32b0770e7153\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 1 00:23:07.468917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3391692465.mount: Deactivated successfully. Nov 1 00:23:07.478329 containerd[1713]: time="2025-11-01T00:23:07.478281293Z" level=info msg="CreateContainer within sandbox \"6d0b6ba5f5f55505071bf7ab03b7b5da45fbace6c2d309fc2e4e32b0770e7153\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e4fafe6b90514954ad71991b350f80a9d476bda76dd79c34b681590702da73a6\"" Nov 1 00:23:07.480555 containerd[1713]: time="2025-11-01T00:23:07.479114403Z" level=info msg="StartContainer for \"e4fafe6b90514954ad71991b350f80a9d476bda76dd79c34b681590702da73a6\"" Nov 1 00:23:07.514710 systemd[1]: Started cri-containerd-e4fafe6b90514954ad71991b350f80a9d476bda76dd79c34b681590702da73a6.scope - libcontainer container e4fafe6b90514954ad71991b350f80a9d476bda76dd79c34b681590702da73a6. 
Nov 1 00:23:07.547365 containerd[1713]: time="2025-11-01T00:23:07.547312936Z" level=info msg="StartContainer for \"e4fafe6b90514954ad71991b350f80a9d476bda76dd79c34b681590702da73a6\" returns successfully" Nov 1 00:23:08.089782 kubelet[3194]: E1101 00:23:08.089098 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-trnvf" podUID="763cf2c8-d06c-456e-8d46-4720620695a1" Nov 1 00:23:09.174983 containerd[1713]: time="2025-11-01T00:23:09.174927130Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: failed to load CNI config list file /etc/cni/net.d/10-calico.conflist: error parsing configuration list: unexpected end of JSON input: invalid cni config: failed to load cni config" Nov 1 00:23:09.177400 systemd[1]: cri-containerd-e4fafe6b90514954ad71991b350f80a9d476bda76dd79c34b681590702da73a6.scope: Deactivated successfully. Nov 1 00:23:09.199470 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e4fafe6b90514954ad71991b350f80a9d476bda76dd79c34b681590702da73a6-rootfs.mount: Deactivated successfully. Nov 1 00:23:09.205589 kubelet[3194]: I1101 00:23:09.205492 3194 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Nov 1 00:23:10.400561 systemd[1]: Created slice kubepods-burstable-poda83589c5_3f06_47b3_8533_6e8d610b7e5a.slice - libcontainer container kubepods-burstable-poda83589c5_3f06_47b3_8533_6e8d610b7e5a.slice. Nov 1 00:23:10.408851 systemd[1]: Created slice kubepods-besteffort-pod763cf2c8_d06c_456e_8d46_4720620695a1.slice - libcontainer container kubepods-besteffort-pod763cf2c8_d06c_456e_8d46_4720620695a1.slice. 
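The "cni config load failed: ... unexpected end of JSON input" entry above is the classic symptom of the runtime re-reading `/etc/cni/net.d/10-calico.conflist` while install-cni is still writing it: the file is momentarily empty or truncated. A hedged sketch of the load-and-validate step (field checks are illustrative, not containerd's exact code):

```python
import json

def load_conflist(text):
    """Parse a CNI .conflist the way a runtime does: it must be a JSON
    object with a non-empty 'plugins' array. An empty or half-written
    file fails with 'unexpected end of JSON input' and should be
    retried on the next fs event rather than treated as fatal."""
    if not text.strip():
        raise ValueError("unexpected end of JSON input (file still being written?)")
    conf = json.loads(text)
    if not isinstance(conf.get("plugins"), list) or not conf["plugins"]:
        raise ValueError("invalid cni config: no plugins in list")
    return conf

# A minimal well-formed list; values are illustrative, not Calico's real config.
good = '{"cniVersion": "0.3.1", "name": "k8s-pod-network", "plugins": [{"type": "calico"}]}'
conf = load_conflist(good)
```

Once install-cni finishes writing the complete conflist, the next reload succeeds and the node's network becomes ready, which is what the "Fast updating node status" entry below reflects.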
Nov 1 00:23:10.444250 kubelet[3194]: I1101 00:23:10.419474 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a83589c5-3f06-47b3-8533-6e8d610b7e5a-config-volume\") pod \"coredns-66bc5c9577-k5c5g\" (UID: \"a83589c5-3f06-47b3-8533-6e8d610b7e5a\") " pod="kube-system/coredns-66bc5c9577-k5c5g" Nov 1 00:23:10.444250 kubelet[3194]: I1101 00:23:10.419501 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vj59\" (UniqueName: \"kubernetes.io/projected/a83589c5-3f06-47b3-8533-6e8d610b7e5a-kube-api-access-2vj59\") pod \"coredns-66bc5c9577-k5c5g\" (UID: \"a83589c5-3f06-47b3-8533-6e8d610b7e5a\") " pod="kube-system/coredns-66bc5c9577-k5c5g" Nov 1 00:23:10.447307 containerd[1713]: time="2025-11-01T00:23:10.447088779Z" level=info msg="shim disconnected" id=e4fafe6b90514954ad71991b350f80a9d476bda76dd79c34b681590702da73a6 namespace=k8s.io Nov 1 00:23:10.448567 containerd[1713]: time="2025-11-01T00:23:10.447273881Z" level=warning msg="cleaning up after shim disconnected" id=e4fafe6b90514954ad71991b350f80a9d476bda76dd79c34b681590702da73a6 namespace=k8s.io Nov 1 00:23:10.448567 containerd[1713]: time="2025-11-01T00:23:10.447399783Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 1 00:23:10.452662 containerd[1713]: time="2025-11-01T00:23:10.452628747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-trnvf,Uid:763cf2c8-d06c-456e-8d46-4720620695a1,Namespace:calico-system,Attempt:0,}" Nov 1 00:23:10.462650 systemd[1]: Created slice kubepods-burstable-pod79bd8e75_9a83_49b2_ac1b_70aed374e2d6.slice - libcontainer container kubepods-burstable-pod79bd8e75_9a83_49b2_ac1b_70aed374e2d6.slice. 
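The `kubepods-*.slice` names systemd creates above are derived mechanically from the pod's QoS class and UID, with the UID's dashes escaped to underscores because systemd treats `-` as a hierarchy separator in slice names. A sketch of that mapping (assuming the systemd cgroup driver; the guaranteed-class rule is my understanding, not shown in this log):

```python
def pod_slice_name(qos_class, pod_uid):
    """Derive the systemd slice name kubelet uses for a pod's cgroup:
    'kubepods-<qos>-pod<uid>.slice', with '-' in the UID mapped to '_'.
    Guaranteed-QoS pods sit directly under kubepods with no qos segment."""
    uid = pod_uid.replace("-", "_")
    if qos_class == "guaranteed":
        return f"kubepods-pod{uid}.slice"
    return f"kubepods-{qos_class}-pod{uid}.slice"

# Matches the csi-node-driver pod's slice created earlier in the log:
slice_name = pod_slice_name("besteffort", "763cf2c8-d06c-456e-8d46-4720620695a1")
# kubepods-besteffort-pod763cf2c8_d06c_456e_8d46_4720620695a1.slice
```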
Nov 1 00:23:10.473321 systemd[1]: Created slice kubepods-besteffort-pod22127e05_14d3_460b_b2fa_c64a0fa29218.slice - libcontainer container kubepods-besteffort-pod22127e05_14d3_460b_b2fa_c64a0fa29218.slice. Nov 1 00:23:10.490565 containerd[1713]: time="2025-11-01T00:23:10.488634787Z" level=warning msg="cleanup warnings time=\"2025-11-01T00:23:10Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Nov 1 00:23:10.522748 systemd[1]: Created slice kubepods-besteffort-pod389b7b2a_9963_4ce4_a0c8_a7f3fe88a917.slice - libcontainer container kubepods-besteffort-pod389b7b2a_9963_4ce4_a0c8_a7f3fe88a917.slice. Nov 1 00:23:10.538506 systemd[1]: Created slice kubepods-besteffort-podd8da81c8_f689_4aff_8f06_3115f31a2434.slice - libcontainer container kubepods-besteffort-podd8da81c8_f689_4aff_8f06_3115f31a2434.slice. Nov 1 00:23:10.547660 systemd[1]: Created slice kubepods-besteffort-podfcbbf525_3d8d_4b5d_819a_2cf75639fa8a.slice - libcontainer container kubepods-besteffort-podfcbbf525_3d8d_4b5d_819a_2cf75639fa8a.slice. Nov 1 00:23:10.556361 systemd[1]: Created slice kubepods-besteffort-pod1e7f5e79_08c7_4630_a4c4_82d9824187a0.slice - libcontainer container kubepods-besteffort-pod1e7f5e79_08c7_4630_a4c4_82d9824187a0.slice. 
Nov 1 00:23:10.621214 kubelet[3194]: I1101 00:23:10.621163 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/389b7b2a-9963-4ce4-a0c8-a7f3fe88a917-goldmane-key-pair\") pod \"goldmane-7c778bb748-f7h6c\" (UID: \"389b7b2a-9963-4ce4-a0c8-a7f3fe88a917\") " pod="calico-system/goldmane-7c778bb748-f7h6c" Nov 1 00:23:10.621214 kubelet[3194]: I1101 00:23:10.621218 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1e7f5e79-08c7-4630-a4c4-82d9824187a0-calico-apiserver-certs\") pod \"calico-apiserver-6f9c5c4598-4kvfs\" (UID: \"1e7f5e79-08c7-4630-a4c4-82d9824187a0\") " pod="calico-apiserver/calico-apiserver-6f9c5c4598-4kvfs" Nov 1 00:23:10.621434 kubelet[3194]: I1101 00:23:10.621242 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/79bd8e75-9a83-49b2-ac1b-70aed374e2d6-config-volume\") pod \"coredns-66bc5c9577-9d267\" (UID: \"79bd8e75-9a83-49b2-ac1b-70aed374e2d6\") " pod="kube-system/coredns-66bc5c9577-9d267" Nov 1 00:23:10.621434 kubelet[3194]: I1101 00:23:10.621266 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pk9gz\" (UniqueName: \"kubernetes.io/projected/d8da81c8-f689-4aff-8f06-3115f31a2434-kube-api-access-pk9gz\") pod \"calico-apiserver-6f9c5c4598-f295m\" (UID: \"d8da81c8-f689-4aff-8f06-3115f31a2434\") " pod="calico-apiserver/calico-apiserver-6f9c5c4598-f295m" Nov 1 00:23:10.621434 kubelet[3194]: I1101 00:23:10.621287 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/389b7b2a-9963-4ce4-a0c8-a7f3fe88a917-config\") pod \"goldmane-7c778bb748-f7h6c\" (UID: 
\"389b7b2a-9963-4ce4-a0c8-a7f3fe88a917\") " pod="calico-system/goldmane-7c778bb748-f7h6c" Nov 1 00:23:10.621434 kubelet[3194]: I1101 00:23:10.621306 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d8da81c8-f689-4aff-8f06-3115f31a2434-calico-apiserver-certs\") pod \"calico-apiserver-6f9c5c4598-f295m\" (UID: \"d8da81c8-f689-4aff-8f06-3115f31a2434\") " pod="calico-apiserver/calico-apiserver-6f9c5c4598-f295m" Nov 1 00:23:10.621434 kubelet[3194]: I1101 00:23:10.621347 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2sv6\" (UniqueName: \"kubernetes.io/projected/389b7b2a-9963-4ce4-a0c8-a7f3fe88a917-kube-api-access-f2sv6\") pod \"goldmane-7c778bb748-f7h6c\" (UID: \"389b7b2a-9963-4ce4-a0c8-a7f3fe88a917\") " pod="calico-system/goldmane-7c778bb748-f7h6c" Nov 1 00:23:10.621724 kubelet[3194]: I1101 00:23:10.621365 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv9db\" (UniqueName: \"kubernetes.io/projected/22127e05-14d3-460b-b2fa-c64a0fa29218-kube-api-access-jv9db\") pod \"whisker-768d7d88cf-bc8c7\" (UID: \"22127e05-14d3-460b-b2fa-c64a0fa29218\") " pod="calico-system/whisker-768d7d88cf-bc8c7" Nov 1 00:23:10.621724 kubelet[3194]: I1101 00:23:10.621394 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/389b7b2a-9963-4ce4-a0c8-a7f3fe88a917-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-f7h6c\" (UID: \"389b7b2a-9963-4ce4-a0c8-a7f3fe88a917\") " pod="calico-system/goldmane-7c778bb748-f7h6c" Nov 1 00:23:10.621724 kubelet[3194]: I1101 00:23:10.621419 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t9bk\" (UniqueName: 
\"kubernetes.io/projected/1e7f5e79-08c7-4630-a4c4-82d9824187a0-kube-api-access-6t9bk\") pod \"calico-apiserver-6f9c5c4598-4kvfs\" (UID: \"1e7f5e79-08c7-4630-a4c4-82d9824187a0\") " pod="calico-apiserver/calico-apiserver-6f9c5c4598-4kvfs" Nov 1 00:23:10.621724 kubelet[3194]: I1101 00:23:10.621441 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcpt7\" (UniqueName: \"kubernetes.io/projected/fcbbf525-3d8d-4b5d-819a-2cf75639fa8a-kube-api-access-xcpt7\") pod \"calico-kube-controllers-d9dc766d8-sj8dp\" (UID: \"fcbbf525-3d8d-4b5d-819a-2cf75639fa8a\") " pod="calico-system/calico-kube-controllers-d9dc766d8-sj8dp" Nov 1 00:23:10.621724 kubelet[3194]: I1101 00:23:10.621476 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/22127e05-14d3-460b-b2fa-c64a0fa29218-whisker-backend-key-pair\") pod \"whisker-768d7d88cf-bc8c7\" (UID: \"22127e05-14d3-460b-b2fa-c64a0fa29218\") " pod="calico-system/whisker-768d7d88cf-bc8c7" Nov 1 00:23:10.621942 kubelet[3194]: I1101 00:23:10.621501 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fcbbf525-3d8d-4b5d-819a-2cf75639fa8a-tigera-ca-bundle\") pod \"calico-kube-controllers-d9dc766d8-sj8dp\" (UID: \"fcbbf525-3d8d-4b5d-819a-2cf75639fa8a\") " pod="calico-system/calico-kube-controllers-d9dc766d8-sj8dp" Nov 1 00:23:10.621942 kubelet[3194]: I1101 00:23:10.621523 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22127e05-14d3-460b-b2fa-c64a0fa29218-whisker-ca-bundle\") pod \"whisker-768d7d88cf-bc8c7\" (UID: \"22127e05-14d3-460b-b2fa-c64a0fa29218\") " pod="calico-system/whisker-768d7d88cf-bc8c7" Nov 1 00:23:10.621942 kubelet[3194]: I1101 00:23:10.621570 
3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx5pw\" (UniqueName: \"kubernetes.io/projected/79bd8e75-9a83-49b2-ac1b-70aed374e2d6-kube-api-access-qx5pw\") pod \"coredns-66bc5c9577-9d267\" (UID: \"79bd8e75-9a83-49b2-ac1b-70aed374e2d6\") " pod="kube-system/coredns-66bc5c9577-9d267" Nov 1 00:23:10.625052 containerd[1713]: time="2025-11-01T00:23:10.624976153Z" level=error msg="Failed to destroy network for sandbox \"337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:10.627767 containerd[1713]: time="2025-11-01T00:23:10.626806476Z" level=error msg="encountered an error cleaning up failed sandbox \"337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:10.627767 containerd[1713]: time="2025-11-01T00:23:10.627040478Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-trnvf,Uid:763cf2c8-d06c-456e-8d46-4720620695a1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:10.628655 kubelet[3194]: E1101 00:23:10.628616 3194 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:10.629610 kubelet[3194]: E1101 00:23:10.628686 3194 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-trnvf" Nov 1 00:23:10.629610 kubelet[3194]: E1101 00:23:10.628712 3194 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-trnvf" Nov 1 00:23:10.629610 kubelet[3194]: E1101 00:23:10.628775 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-trnvf_calico-system(763cf2c8-d06c-456e-8d46-4720620695a1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-trnvf_calico-system(763cf2c8-d06c-456e-8d46-4720620695a1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-trnvf" podUID="763cf2c8-d06c-456e-8d46-4720620695a1" Nov 1 00:23:10.631157 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9-shm.mount: Deactivated successfully. Nov 1 00:23:10.761126 containerd[1713]: time="2025-11-01T00:23:10.758004479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-k5c5g,Uid:a83589c5-3f06-47b3-8533-6e8d610b7e5a,Namespace:kube-system,Attempt:0,}" Nov 1 00:23:10.775944 containerd[1713]: time="2025-11-01T00:23:10.775900498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9d267,Uid:79bd8e75-9a83-49b2-ac1b-70aed374e2d6,Namespace:kube-system,Attempt:0,}" Nov 1 00:23:10.812203 containerd[1713]: time="2025-11-01T00:23:10.811812337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-768d7d88cf-bc8c7,Uid:22127e05-14d3-460b-b2fa-c64a0fa29218,Namespace:calico-system,Attempt:0,}" Nov 1 00:23:10.837602 containerd[1713]: time="2025-11-01T00:23:10.837195247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-f7h6c,Uid:389b7b2a-9963-4ce4-a0c8-a7f3fe88a917,Namespace:calico-system,Attempt:0,}" Nov 1 00:23:10.849180 containerd[1713]: time="2025-11-01T00:23:10.848741088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f9c5c4598-f295m,Uid:d8da81c8-f689-4aff-8f06-3115f31a2434,Namespace:calico-apiserver,Attempt:0,}" Nov 1 00:23:10.862644 containerd[1713]: time="2025-11-01T00:23:10.862600858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d9dc766d8-sj8dp,Uid:fcbbf525-3d8d-4b5d-819a-2cf75639fa8a,Namespace:calico-system,Attempt:0,}" Nov 1 00:23:10.875265 containerd[1713]: time="2025-11-01T00:23:10.875215712Z" level=error msg="Failed to destroy network for sandbox \"febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Nov 1 00:23:10.876514 containerd[1713]: time="2025-11-01T00:23:10.876257025Z" level=error msg="encountered an error cleaning up failed sandbox \"febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:10.876514 containerd[1713]: time="2025-11-01T00:23:10.876326725Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-k5c5g,Uid:a83589c5-3f06-47b3-8533-6e8d610b7e5a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:10.877912 kubelet[3194]: E1101 00:23:10.876794 3194 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:10.877912 kubelet[3194]: E1101 00:23:10.876856 3194 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-k5c5g" Nov 1 00:23:10.877912 kubelet[3194]: E1101 00:23:10.876881 3194 
kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-k5c5g" Nov 1 00:23:10.878118 kubelet[3194]: E1101 00:23:10.876940 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-k5c5g_kube-system(a83589c5-3f06-47b3-8533-6e8d610b7e5a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-k5c5g_kube-system(a83589c5-3f06-47b3-8533-6e8d610b7e5a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-k5c5g" podUID="a83589c5-3f06-47b3-8533-6e8d610b7e5a" Nov 1 00:23:10.880391 containerd[1713]: time="2025-11-01T00:23:10.880286374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f9c5c4598-4kvfs,Uid:1e7f5e79-08c7-4630-a4c4-82d9824187a0,Namespace:calico-apiserver,Attempt:0,}" Nov 1 00:23:10.924782 containerd[1713]: time="2025-11-01T00:23:10.924627716Z" level=error msg="Failed to destroy network for sandbox \"d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:10.925056 containerd[1713]: time="2025-11-01T00:23:10.924959520Z" level=error msg="encountered an error cleaning up failed 
sandbox \"d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:10.925056 containerd[1713]: time="2025-11-01T00:23:10.925021921Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9d267,Uid:79bd8e75-9a83-49b2-ac1b-70aed374e2d6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:10.925278 kubelet[3194]: E1101 00:23:10.925242 3194 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:10.925348 kubelet[3194]: E1101 00:23:10.925305 3194 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-9d267" Nov 1 00:23:10.925348 kubelet[3194]: E1101 00:23:10.925335 3194 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-9d267" Nov 1 00:23:10.925459 kubelet[3194]: E1101 00:23:10.925397 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-9d267_kube-system(79bd8e75-9a83-49b2-ac1b-70aed374e2d6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-9d267_kube-system(79bd8e75-9a83-49b2-ac1b-70aed374e2d6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-9d267" podUID="79bd8e75-9a83-49b2-ac1b-70aed374e2d6" Nov 1 00:23:10.963660 containerd[1713]: time="2025-11-01T00:23:10.963608492Z" level=error msg="Failed to destroy network for sandbox \"958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:10.963976 containerd[1713]: time="2025-11-01T00:23:10.963932096Z" level=error msg="encountered an error cleaning up failed sandbox \"958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:10.964073 containerd[1713]: time="2025-11-01T00:23:10.963996397Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-768d7d88cf-bc8c7,Uid:22127e05-14d3-460b-b2fa-c64a0fa29218,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:10.964262 kubelet[3194]: E1101 00:23:10.964220 3194 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:10.964340 kubelet[3194]: E1101 00:23:10.964297 3194 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-768d7d88cf-bc8c7" Nov 1 00:23:10.964340 kubelet[3194]: E1101 00:23:10.964326 3194 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-768d7d88cf-bc8c7" Nov 1 00:23:10.964424 kubelet[3194]: E1101 00:23:10.964394 3194 pod_workers.go:1324] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"whisker-768d7d88cf-bc8c7_calico-system(22127e05-14d3-460b-b2fa-c64a0fa29218)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-768d7d88cf-bc8c7_calico-system(22127e05-14d3-460b-b2fa-c64a0fa29218)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-768d7d88cf-bc8c7" podUID="22127e05-14d3-460b-b2fa-c64a0fa29218" Nov 1 00:23:11.079219 containerd[1713]: time="2025-11-01T00:23:11.079016403Z" level=error msg="Failed to destroy network for sandbox \"3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.079585 containerd[1713]: time="2025-11-01T00:23:11.079417108Z" level=error msg="encountered an error cleaning up failed sandbox \"3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.079585 containerd[1713]: time="2025-11-01T00:23:11.079486608Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-f7h6c,Uid:389b7b2a-9963-4ce4-a0c8-a7f3fe88a917,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.079890 kubelet[3194]: E1101 00:23:11.079799 3194 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.080229 kubelet[3194]: E1101 00:23:11.080114 3194 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-f7h6c" Nov 1 00:23:11.080229 kubelet[3194]: E1101 00:23:11.080164 3194 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-f7h6c" Nov 1 00:23:11.080840 kubelet[3194]: E1101 00:23:11.080344 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-f7h6c_calico-system(389b7b2a-9963-4ce4-a0c8-a7f3fe88a917)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-f7h6c_calico-system(389b7b2a-9963-4ce4-a0c8-a7f3fe88a917)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-f7h6c" podUID="389b7b2a-9963-4ce4-a0c8-a7f3fe88a917" Nov 1 00:23:11.127770 containerd[1713]: time="2025-11-01T00:23:11.127553696Z" level=error msg="Failed to destroy network for sandbox \"81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.129260 containerd[1713]: time="2025-11-01T00:23:11.128595109Z" level=error msg="encountered an error cleaning up failed sandbox \"81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.129260 containerd[1713]: time="2025-11-01T00:23:11.128816811Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f9c5c4598-f295m,Uid:d8da81c8-f689-4aff-8f06-3115f31a2434,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.129450 kubelet[3194]: E1101 00:23:11.129266 3194 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.129450 kubelet[3194]: E1101 00:23:11.129324 3194 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f9c5c4598-f295m" Nov 1 00:23:11.129450 kubelet[3194]: E1101 00:23:11.129348 3194 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f9c5c4598-f295m" Nov 1 00:23:11.129613 kubelet[3194]: E1101 00:23:11.129406 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f9c5c4598-f295m_calico-apiserver(d8da81c8-f689-4aff-8f06-3115f31a2434)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f9c5c4598-f295m_calico-apiserver(d8da81c8-f689-4aff-8f06-3115f31a2434)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-6f9c5c4598-f295m" podUID="d8da81c8-f689-4aff-8f06-3115f31a2434" Nov 1 00:23:11.142155 containerd[1713]: time="2025-11-01T00:23:11.142113974Z" level=error msg="Failed to destroy network for sandbox \"c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.142464 containerd[1713]: time="2025-11-01T00:23:11.142432978Z" level=error msg="encountered an error cleaning up failed sandbox \"c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.142576 containerd[1713]: time="2025-11-01T00:23:11.142490679Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d9dc766d8-sj8dp,Uid:fcbbf525-3d8d-4b5d-819a-2cf75639fa8a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.143292 containerd[1713]: time="2025-11-01T00:23:11.143259688Z" level=error msg="Failed to destroy network for sandbox \"c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.144087 containerd[1713]: time="2025-11-01T00:23:11.143510291Z" level=error msg="encountered an 
error cleaning up failed sandbox \"c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.144087 containerd[1713]: time="2025-11-01T00:23:11.143591892Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f9c5c4598-4kvfs,Uid:1e7f5e79-08c7-4630-a4c4-82d9824187a0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.144262 kubelet[3194]: E1101 00:23:11.143739 3194 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.144262 kubelet[3194]: E1101 00:23:11.143786 3194 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f9c5c4598-4kvfs" Nov 1 00:23:11.144262 kubelet[3194]: E1101 00:23:11.143806 3194 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6f9c5c4598-4kvfs" Nov 1 00:23:11.144262 kubelet[3194]: E1101 00:23:11.143739 3194 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.144448 kubelet[3194]: E1101 00:23:11.143849 3194 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-d9dc766d8-sj8dp" Nov 1 00:23:11.144448 kubelet[3194]: E1101 00:23:11.143867 3194 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-d9dc766d8-sj8dp" Nov 1 00:23:11.144448 kubelet[3194]: E1101 00:23:11.143860 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-6f9c5c4598-4kvfs_calico-apiserver(1e7f5e79-08c7-4630-a4c4-82d9824187a0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f9c5c4598-4kvfs_calico-apiserver(1e7f5e79-08c7-4630-a4c4-82d9824187a0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f9c5c4598-4kvfs" podUID="1e7f5e79-08c7-4630-a4c4-82d9824187a0" Nov 1 00:23:11.144622 kubelet[3194]: E1101 00:23:11.143926 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-d9dc766d8-sj8dp_calico-system(fcbbf525-3d8d-4b5d-819a-2cf75639fa8a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-d9dc766d8-sj8dp_calico-system(fcbbf525-3d8d-4b5d-819a-2cf75639fa8a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-d9dc766d8-sj8dp" podUID="fcbbf525-3d8d-4b5d-819a-2cf75639fa8a" Nov 1 00:23:11.232169 kubelet[3194]: I1101 00:23:11.232120 3194 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa" Nov 1 00:23:11.233244 containerd[1713]: time="2025-11-01T00:23:11.232997685Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 1 00:23:11.234073 containerd[1713]: time="2025-11-01T00:23:11.233394790Z" level=info msg="StopPodSandbox 
for \"c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa\"" Nov 1 00:23:11.234073 containerd[1713]: time="2025-11-01T00:23:11.233829395Z" level=info msg="Ensure that sandbox c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa in task-service has been cleanup successfully" Nov 1 00:23:11.241895 kubelet[3194]: I1101 00:23:11.241852 3194 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b" Nov 1 00:23:11.243621 containerd[1713]: time="2025-11-01T00:23:11.243406112Z" level=info msg="StopPodSandbox for \"3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b\"" Nov 1 00:23:11.244824 containerd[1713]: time="2025-11-01T00:23:11.243627115Z" level=info msg="Ensure that sandbox 3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b in task-service has been cleanup successfully" Nov 1 00:23:11.252978 kubelet[3194]: I1101 00:23:11.252950 3194 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b" Nov 1 00:23:11.256928 containerd[1713]: time="2025-11-01T00:23:11.255704262Z" level=info msg="StopPodSandbox for \"d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b\"" Nov 1 00:23:11.257714 containerd[1713]: time="2025-11-01T00:23:11.257680486Z" level=info msg="Ensure that sandbox d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b in task-service has been cleanup successfully" Nov 1 00:23:11.259787 kubelet[3194]: I1101 00:23:11.259349 3194 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456" Nov 1 00:23:11.260102 containerd[1713]: time="2025-11-01T00:23:11.260076516Z" level=info msg="StopPodSandbox for \"c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456\"" Nov 1 00:23:11.261210 containerd[1713]: 
time="2025-11-01T00:23:11.261182329Z" level=info msg="Ensure that sandbox c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456 in task-service has been cleanup successfully" Nov 1 00:23:11.262777 kubelet[3194]: I1101 00:23:11.262755 3194 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7" Nov 1 00:23:11.267750 containerd[1713]: time="2025-11-01T00:23:11.267715909Z" level=info msg="StopPodSandbox for \"81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7\"" Nov 1 00:23:11.271184 containerd[1713]: time="2025-11-01T00:23:11.271149751Z" level=info msg="Ensure that sandbox 81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7 in task-service has been cleanup successfully" Nov 1 00:23:11.274686 kubelet[3194]: I1101 00:23:11.274232 3194 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783" Nov 1 00:23:11.275728 containerd[1713]: time="2025-11-01T00:23:11.275699607Z" level=info msg="StopPodSandbox for \"958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783\"" Nov 1 00:23:11.277146 containerd[1713]: time="2025-11-01T00:23:11.277113124Z" level=info msg="Ensure that sandbox 958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783 in task-service has been cleanup successfully" Nov 1 00:23:11.285025 kubelet[3194]: I1101 00:23:11.284988 3194 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4" Nov 1 00:23:11.285802 containerd[1713]: time="2025-11-01T00:23:11.285766730Z" level=info msg="StopPodSandbox for \"febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4\"" Nov 1 00:23:11.285992 containerd[1713]: time="2025-11-01T00:23:11.285960532Z" level=info msg="Ensure that sandbox 
febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4 in task-service has been cleanup successfully" Nov 1 00:23:11.292841 kubelet[3194]: I1101 00:23:11.292808 3194 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9" Nov 1 00:23:11.298145 containerd[1713]: time="2025-11-01T00:23:11.297265470Z" level=info msg="StopPodSandbox for \"337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9\"" Nov 1 00:23:11.300256 containerd[1713]: time="2025-11-01T00:23:11.299956303Z" level=info msg="Ensure that sandbox 337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9 in task-service has been cleanup successfully" Nov 1 00:23:11.379212 containerd[1713]: time="2025-11-01T00:23:11.378970169Z" level=error msg="StopPodSandbox for \"febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4\" failed" error="failed to destroy network for sandbox \"febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.381380 kubelet[3194]: E1101 00:23:11.381329 3194 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4" Nov 1 00:23:11.381598 kubelet[3194]: E1101 00:23:11.381404 3194 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4"} Nov 1 00:23:11.381598 
kubelet[3194]: E1101 00:23:11.381491 3194 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a83589c5-3f06-47b3-8533-6e8d610b7e5a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:23:11.381878 kubelet[3194]: E1101 00:23:11.381741 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a83589c5-3f06-47b3-8533-6e8d610b7e5a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-k5c5g" podUID="a83589c5-3f06-47b3-8533-6e8d610b7e5a" Nov 1 00:23:11.384525 containerd[1713]: time="2025-11-01T00:23:11.384463036Z" level=error msg="StopPodSandbox for \"337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9\" failed" error="failed to destroy network for sandbox \"337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.385038 kubelet[3194]: E1101 00:23:11.384807 3194 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9" Nov 1 00:23:11.385130 kubelet[3194]: E1101 00:23:11.385055 3194 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9"} Nov 1 00:23:11.385130 kubelet[3194]: E1101 00:23:11.385089 3194 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"763cf2c8-d06c-456e-8d46-4720620695a1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:23:11.385261 kubelet[3194]: E1101 00:23:11.385122 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"763cf2c8-d06c-456e-8d46-4720620695a1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-trnvf" podUID="763cf2c8-d06c-456e-8d46-4720620695a1" Nov 1 00:23:11.417497 containerd[1713]: time="2025-11-01T00:23:11.417040734Z" level=error msg="StopPodSandbox for \"c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa\" failed" error="failed to destroy network for sandbox \"c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.417497 containerd[1713]: time="2025-11-01T00:23:11.417269337Z" level=error msg="StopPodSandbox for \"81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7\" failed" error="failed to destroy network for sandbox \"81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.417718 kubelet[3194]: E1101 00:23:11.417489 3194 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7" Nov 1 00:23:11.418259 kubelet[3194]: E1101 00:23:11.418078 3194 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa" Nov 1 00:23:11.418259 kubelet[3194]: E1101 00:23:11.418123 3194 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa"} Nov 1 00:23:11.418259 kubelet[3194]: E1101 00:23:11.418161 3194 kuberuntime_manager.go:1233] 
"killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1e7f5e79-08c7-4630-a4c4-82d9824187a0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:23:11.418259 kubelet[3194]: E1101 00:23:11.418205 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1e7f5e79-08c7-4630-a4c4-82d9824187a0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f9c5c4598-4kvfs" podUID="1e7f5e79-08c7-4630-a4c4-82d9824187a0" Nov 1 00:23:11.418259 kubelet[3194]: E1101 00:23:11.418253 3194 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7"} Nov 1 00:23:11.418685 kubelet[3194]: E1101 00:23:11.418284 3194 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d8da81c8-f689-4aff-8f06-3115f31a2434\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:23:11.418685 kubelet[3194]: E1101 00:23:11.418308 3194 pod_workers.go:1324] 
"Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d8da81c8-f689-4aff-8f06-3115f31a2434\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6f9c5c4598-f295m" podUID="d8da81c8-f689-4aff-8f06-3115f31a2434" Nov 1 00:23:11.420977 containerd[1713]: time="2025-11-01T00:23:11.420216273Z" level=error msg="StopPodSandbox for \"c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456\" failed" error="failed to destroy network for sandbox \"c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.421411 kubelet[3194]: E1101 00:23:11.421338 3194 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456" Nov 1 00:23:11.421411 kubelet[3194]: E1101 00:23:11.421387 3194 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456"} Nov 1 00:23:11.421638 kubelet[3194]: E1101 00:23:11.421420 3194 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"fcbbf525-3d8d-4b5d-819a-2cf75639fa8a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:23:11.421638 kubelet[3194]: E1101 00:23:11.421452 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fcbbf525-3d8d-4b5d-819a-2cf75639fa8a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-d9dc766d8-sj8dp" podUID="fcbbf525-3d8d-4b5d-819a-2cf75639fa8a" Nov 1 00:23:11.428498 containerd[1713]: time="2025-11-01T00:23:11.428447374Z" level=error msg="StopPodSandbox for \"3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b\" failed" error="failed to destroy network for sandbox \"3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.428838 kubelet[3194]: E1101 00:23:11.428799 3194 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b" Nov 1 00:23:11.428963 kubelet[3194]: E1101 00:23:11.428842 3194 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b"} Nov 1 00:23:11.428963 kubelet[3194]: E1101 00:23:11.428884 3194 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"389b7b2a-9963-4ce4-a0c8-a7f3fe88a917\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:23:11.428963 kubelet[3194]: E1101 00:23:11.428916 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"389b7b2a-9963-4ce4-a0c8-a7f3fe88a917\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-f7h6c" podUID="389b7b2a-9963-4ce4-a0c8-a7f3fe88a917" Nov 1 00:23:11.430087 containerd[1713]: time="2025-11-01T00:23:11.430051993Z" level=error msg="StopPodSandbox for \"d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b\" failed" error="failed to destroy network for sandbox \"d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Nov 1 00:23:11.430441 kubelet[3194]: E1101 00:23:11.430410 3194 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b" Nov 1 00:23:11.430619 kubelet[3194]: E1101 00:23:11.430445 3194 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b"} Nov 1 00:23:11.430619 kubelet[3194]: E1101 00:23:11.430476 3194 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"79bd8e75-9a83-49b2-ac1b-70aed374e2d6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:23:11.430619 kubelet[3194]: E1101 00:23:11.430505 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"79bd8e75-9a83-49b2-ac1b-70aed374e2d6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-9d267" podUID="79bd8e75-9a83-49b2-ac1b-70aed374e2d6" Nov 1 00:23:11.435102 containerd[1713]: 
time="2025-11-01T00:23:11.435066655Z" level=error msg="StopPodSandbox for \"958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783\" failed" error="failed to destroy network for sandbox \"958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:11.435374 kubelet[3194]: E1101 00:23:11.435217 3194 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783" Nov 1 00:23:11.435374 kubelet[3194]: E1101 00:23:11.435252 3194 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783"} Nov 1 00:23:11.435374 kubelet[3194]: E1101 00:23:11.435272 3194 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"22127e05-14d3-460b-b2fa-c64a0fa29218\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 1 00:23:11.435374 kubelet[3194]: E1101 00:23:11.435294 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"22127e05-14d3-460b-b2fa-c64a0fa29218\" with KillPodSandboxError: \"rpc error: code = 
Unknown desc = failed to destroy network for sandbox \\\"958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-768d7d88cf-bc8c7" podUID="22127e05-14d3-460b-b2fa-c64a0fa29218" Nov 1 00:23:16.976132 kubelet[3194]: I1101 00:23:16.975867 3194 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:23:17.968501 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3940188156.mount: Deactivated successfully. Nov 1 00:23:18.013229 containerd[1713]: time="2025-11-01T00:23:18.013168381Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:23:18.016101 containerd[1713]: time="2025-11-01T00:23:18.016031015Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 1 00:23:18.019599 containerd[1713]: time="2025-11-01T00:23:18.019503255Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:23:18.023703 containerd[1713]: time="2025-11-01T00:23:18.023666603Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:23:18.025680 containerd[1713]: time="2025-11-01T00:23:18.025428724Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest 
\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 6.792267338s" Nov 1 00:23:18.025680 containerd[1713]: time="2025-11-01T00:23:18.025471524Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 1 00:23:18.047660 containerd[1713]: time="2025-11-01T00:23:18.047618582Z" level=info msg="CreateContainer within sandbox \"6d0b6ba5f5f55505071bf7ab03b7b5da45fbace6c2d309fc2e4e32b0770e7153\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 1 00:23:18.095729 containerd[1713]: time="2025-11-01T00:23:18.095686542Z" level=info msg="CreateContainer within sandbox \"6d0b6ba5f5f55505071bf7ab03b7b5da45fbace6c2d309fc2e4e32b0770e7153\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"dc8baf958d5a6d486f61670b2c265bdb1dbcfabef0b5a2309384f0480e4c1e77\"" Nov 1 00:23:18.096437 containerd[1713]: time="2025-11-01T00:23:18.096402150Z" level=info msg="StartContainer for \"dc8baf958d5a6d486f61670b2c265bdb1dbcfabef0b5a2309384f0480e4c1e77\"" Nov 1 00:23:18.122700 systemd[1]: Started cri-containerd-dc8baf958d5a6d486f61670b2c265bdb1dbcfabef0b5a2309384f0480e4c1e77.scope - libcontainer container dc8baf958d5a6d486f61670b2c265bdb1dbcfabef0b5a2309384f0480e4c1e77. 
Nov 1 00:23:18.164580 containerd[1713]: time="2025-11-01T00:23:18.164454343Z" level=info msg="StartContainer for \"dc8baf958d5a6d486f61670b2c265bdb1dbcfabef0b5a2309384f0480e4c1e77\" returns successfully" Nov 1 00:23:18.343446 kubelet[3194]: I1101 00:23:18.343299 3194 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-jg4jd" podStartSLOduration=1.62147197 podStartE2EDuration="21.343257325s" podCreationTimestamp="2025-11-01 00:22:57 +0000 UTC" firstStartedPulling="2025-11-01 00:22:58.307372712 +0000 UTC m=+22.741492723" lastFinishedPulling="2025-11-01 00:23:18.029158167 +0000 UTC m=+42.463278078" observedRunningTime="2025-11-01 00:23:18.341723907 +0000 UTC m=+42.775843918" watchObservedRunningTime="2025-11-01 00:23:18.343257325 +0000 UTC m=+42.777377236" Nov 1 00:23:18.689463 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 1 00:23:18.689623 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 1 00:23:18.810038 containerd[1713]: time="2025-11-01T00:23:18.809481154Z" level=info msg="StopPodSandbox for \"958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783\"" Nov 1 00:23:18.969754 containerd[1713]: 2025-11-01 00:23:18.905 [INFO][4448] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783" Nov 1 00:23:18.969754 containerd[1713]: 2025-11-01 00:23:18.906 [INFO][4448] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783" iface="eth0" netns="/var/run/netns/cni-61b5dc22-dfe3-8655-8b7e-d8a4a7bf9608" Nov 1 00:23:18.969754 containerd[1713]: 2025-11-01 00:23:18.906 [INFO][4448] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783" iface="eth0" netns="/var/run/netns/cni-61b5dc22-dfe3-8655-8b7e-d8a4a7bf9608" Nov 1 00:23:18.969754 containerd[1713]: 2025-11-01 00:23:18.906 [INFO][4448] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783" iface="eth0" netns="/var/run/netns/cni-61b5dc22-dfe3-8655-8b7e-d8a4a7bf9608" Nov 1 00:23:18.969754 containerd[1713]: 2025-11-01 00:23:18.907 [INFO][4448] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783" Nov 1 00:23:18.969754 containerd[1713]: 2025-11-01 00:23:18.907 [INFO][4448] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783" Nov 1 00:23:18.969754 containerd[1713]: 2025-11-01 00:23:18.950 [INFO][4455] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783" HandleID="k8s-pod-network.958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783" Workload="ci--4081.3.6--n--534d15dd10-k8s-whisker--768d7d88cf--bc8c7-eth0" Nov 1 00:23:18.969754 containerd[1713]: 2025-11-01 00:23:18.951 [INFO][4455] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:18.969754 containerd[1713]: 2025-11-01 00:23:18.951 [INFO][4455] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:18.969754 containerd[1713]: 2025-11-01 00:23:18.959 [WARNING][4455] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783" HandleID="k8s-pod-network.958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783" Workload="ci--4081.3.6--n--534d15dd10-k8s-whisker--768d7d88cf--bc8c7-eth0" Nov 1 00:23:18.969754 containerd[1713]: 2025-11-01 00:23:18.959 [INFO][4455] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783" HandleID="k8s-pod-network.958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783" Workload="ci--4081.3.6--n--534d15dd10-k8s-whisker--768d7d88cf--bc8c7-eth0" Nov 1 00:23:18.969754 containerd[1713]: 2025-11-01 00:23:18.960 [INFO][4455] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:18.969754 containerd[1713]: 2025-11-01 00:23:18.965 [INFO][4448] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783" Nov 1 00:23:18.971297 containerd[1713]: time="2025-11-01T00:23:18.970644531Z" level=info msg="TearDown network for sandbox \"958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783\" successfully" Nov 1 00:23:18.971297 containerd[1713]: time="2025-11-01T00:23:18.970682831Z" level=info msg="StopPodSandbox for \"958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783\" returns successfully" Nov 1 00:23:18.979678 systemd[1]: run-netns-cni\x2d61b5dc22\x2ddfe3\x2d8655\x2d8b7e\x2dd8a4a7bf9608.mount: Deactivated successfully. 
Nov 1 00:23:19.085072 kubelet[3194]: I1101 00:23:19.084603 3194 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jv9db\" (UniqueName: \"kubernetes.io/projected/22127e05-14d3-460b-b2fa-c64a0fa29218-kube-api-access-jv9db\") pod \"22127e05-14d3-460b-b2fa-c64a0fa29218\" (UID: \"22127e05-14d3-460b-b2fa-c64a0fa29218\") " Nov 1 00:23:19.085072 kubelet[3194]: I1101 00:23:19.084670 3194 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/22127e05-14d3-460b-b2fa-c64a0fa29218-whisker-backend-key-pair\") pod \"22127e05-14d3-460b-b2fa-c64a0fa29218\" (UID: \"22127e05-14d3-460b-b2fa-c64a0fa29218\") " Nov 1 00:23:19.086814 kubelet[3194]: I1101 00:23:19.086780 3194 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22127e05-14d3-460b-b2fa-c64a0fa29218-whisker-ca-bundle\") pod \"22127e05-14d3-460b-b2fa-c64a0fa29218\" (UID: \"22127e05-14d3-460b-b2fa-c64a0fa29218\") " Nov 1 00:23:19.087729 kubelet[3194]: I1101 00:23:19.087669 3194 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22127e05-14d3-460b-b2fa-c64a0fa29218-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "22127e05-14d3-460b-b2fa-c64a0fa29218" (UID: "22127e05-14d3-460b-b2fa-c64a0fa29218"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 00:23:19.092046 systemd[1]: var-lib-kubelet-pods-22127e05\x2d14d3\x2d460b\x2db2fa\x2dc64a0fa29218-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djv9db.mount: Deactivated successfully. Nov 1 00:23:19.092315 systemd[1]: var-lib-kubelet-pods-22127e05\x2d14d3\x2d460b\x2db2fa\x2dc64a0fa29218-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Nov 1 00:23:19.092706 kubelet[3194]: I1101 00:23:19.092645 3194 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22127e05-14d3-460b-b2fa-c64a0fa29218-kube-api-access-jv9db" (OuterVolumeSpecName: "kube-api-access-jv9db") pod "22127e05-14d3-460b-b2fa-c64a0fa29218" (UID: "22127e05-14d3-460b-b2fa-c64a0fa29218"). InnerVolumeSpecName "kube-api-access-jv9db". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:23:19.092706 kubelet[3194]: I1101 00:23:19.092673 3194 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22127e05-14d3-460b-b2fa-c64a0fa29218-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "22127e05-14d3-460b-b2fa-c64a0fa29218" (UID: "22127e05-14d3-460b-b2fa-c64a0fa29218"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 00:23:19.187785 kubelet[3194]: I1101 00:23:19.187733 3194 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/22127e05-14d3-460b-b2fa-c64a0fa29218-whisker-ca-bundle\") on node \"ci-4081.3.6-n-534d15dd10\" DevicePath \"\"" Nov 1 00:23:19.187785 kubelet[3194]: I1101 00:23:19.187774 3194 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jv9db\" (UniqueName: \"kubernetes.io/projected/22127e05-14d3-460b-b2fa-c64a0fa29218-kube-api-access-jv9db\") on node \"ci-4081.3.6-n-534d15dd10\" DevicePath \"\"" Nov 1 00:23:19.187785 kubelet[3194]: I1101 00:23:19.187789 3194 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/22127e05-14d3-460b-b2fa-c64a0fa29218-whisker-backend-key-pair\") on node \"ci-4081.3.6-n-534d15dd10\" DevicePath \"\"" Nov 1 00:23:19.332680 systemd[1]: Removed slice kubepods-besteffort-pod22127e05_14d3_460b_b2fa_c64a0fa29218.slice - libcontainer container 
kubepods-besteffort-pod22127e05_14d3_460b_b2fa_c64a0fa29218.slice. Nov 1 00:23:19.354950 systemd[1]: run-containerd-runc-k8s.io-dc8baf958d5a6d486f61670b2c265bdb1dbcfabef0b5a2309384f0480e4c1e77-runc.GARfhJ.mount: Deactivated successfully. Nov 1 00:23:19.456824 systemd[1]: Created slice kubepods-besteffort-pod729ae25b_84a0_42aa_9bbf_32506f51f3c1.slice - libcontainer container kubepods-besteffort-pod729ae25b_84a0_42aa_9bbf_32506f51f3c1.slice. Nov 1 00:23:19.490491 kubelet[3194]: I1101 00:23:19.490443 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/729ae25b-84a0-42aa-9bbf-32506f51f3c1-whisker-backend-key-pair\") pod \"whisker-5c6f5f86c9-hxs55\" (UID: \"729ae25b-84a0-42aa-9bbf-32506f51f3c1\") " pod="calico-system/whisker-5c6f5f86c9-hxs55" Nov 1 00:23:19.490491 kubelet[3194]: I1101 00:23:19.490495 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/729ae25b-84a0-42aa-9bbf-32506f51f3c1-whisker-ca-bundle\") pod \"whisker-5c6f5f86c9-hxs55\" (UID: \"729ae25b-84a0-42aa-9bbf-32506f51f3c1\") " pod="calico-system/whisker-5c6f5f86c9-hxs55" Nov 1 00:23:19.490960 kubelet[3194]: I1101 00:23:19.490521 3194 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpj7s\" (UniqueName: \"kubernetes.io/projected/729ae25b-84a0-42aa-9bbf-32506f51f3c1-kube-api-access-mpj7s\") pod \"whisker-5c6f5f86c9-hxs55\" (UID: \"729ae25b-84a0-42aa-9bbf-32506f51f3c1\") " pod="calico-system/whisker-5c6f5f86c9-hxs55" Nov 1 00:23:19.770091 containerd[1713]: time="2025-11-01T00:23:19.770049440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c6f5f86c9-hxs55,Uid:729ae25b-84a0-42aa-9bbf-32506f51f3c1,Namespace:calico-system,Attempt:0,}" Nov 1 00:23:19.926852 systemd-networkd[1344]: calif5dd0479945: Link UP Nov 1 
00:23:19.927170 systemd-networkd[1344]: calif5dd0479945: Gained carrier Nov 1 00:23:19.961350 containerd[1713]: 2025-11-01 00:23:19.822 [INFO][4501] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:23:19.961350 containerd[1713]: 2025-11-01 00:23:19.833 [INFO][4501] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--534d15dd10-k8s-whisker--5c6f5f86c9--hxs55-eth0 whisker-5c6f5f86c9- calico-system 729ae25b-84a0-42aa-9bbf-32506f51f3c1 895 0 2025-11-01 00:23:19 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5c6f5f86c9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-n-534d15dd10 whisker-5c6f5f86c9-hxs55 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calif5dd0479945 [] [] }} ContainerID="00be3577de1ac6a989e6114fef5fbcc93f379216db70a4e6d898838fa6f8cefd" Namespace="calico-system" Pod="whisker-5c6f5f86c9-hxs55" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-whisker--5c6f5f86c9--hxs55-" Nov 1 00:23:19.961350 containerd[1713]: 2025-11-01 00:23:19.833 [INFO][4501] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="00be3577de1ac6a989e6114fef5fbcc93f379216db70a4e6d898838fa6f8cefd" Namespace="calico-system" Pod="whisker-5c6f5f86c9-hxs55" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-whisker--5c6f5f86c9--hxs55-eth0" Nov 1 00:23:19.961350 containerd[1713]: 2025-11-01 00:23:19.857 [INFO][4513] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="00be3577de1ac6a989e6114fef5fbcc93f379216db70a4e6d898838fa6f8cefd" HandleID="k8s-pod-network.00be3577de1ac6a989e6114fef5fbcc93f379216db70a4e6d898838fa6f8cefd" Workload="ci--4081.3.6--n--534d15dd10-k8s-whisker--5c6f5f86c9--hxs55-eth0" Nov 1 00:23:19.961350 containerd[1713]: 2025-11-01 00:23:19.857 [INFO][4513] ipam/ipam_plugin.go 275: Auto 
assigning IP ContainerID="00be3577de1ac6a989e6114fef5fbcc93f379216db70a4e6d898838fa6f8cefd" HandleID="k8s-pod-network.00be3577de1ac6a989e6114fef5fbcc93f379216db70a4e6d898838fa6f8cefd" Workload="ci--4081.3.6--n--534d15dd10-k8s-whisker--5c6f5f86c9--hxs55-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-534d15dd10", "pod":"whisker-5c6f5f86c9-hxs55", "timestamp":"2025-11-01 00:23:19.857298956 +0000 UTC"}, Hostname:"ci-4081.3.6-n-534d15dd10", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:23:19.961350 containerd[1713]: 2025-11-01 00:23:19.857 [INFO][4513] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:19.961350 containerd[1713]: 2025-11-01 00:23:19.857 [INFO][4513] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:23:19.961350 containerd[1713]: 2025-11-01 00:23:19.857 [INFO][4513] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-534d15dd10' Nov 1 00:23:19.961350 containerd[1713]: 2025-11-01 00:23:19.863 [INFO][4513] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.00be3577de1ac6a989e6114fef5fbcc93f379216db70a4e6d898838fa6f8cefd" host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:19.961350 containerd[1713]: 2025-11-01 00:23:19.867 [INFO][4513] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:19.961350 containerd[1713]: 2025-11-01 00:23:19.872 [INFO][4513] ipam/ipam.go 511: Trying affinity for 192.168.34.128/26 host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:19.961350 containerd[1713]: 2025-11-01 00:23:19.874 [INFO][4513] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.128/26 host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:19.961350 containerd[1713]: 2025-11-01 00:23:19.875 [INFO][4513] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.128/26 host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:19.961350 containerd[1713]: 2025-11-01 00:23:19.875 [INFO][4513] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.34.128/26 handle="k8s-pod-network.00be3577de1ac6a989e6114fef5fbcc93f379216db70a4e6d898838fa6f8cefd" host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:19.961350 containerd[1713]: 2025-11-01 00:23:19.877 [INFO][4513] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.00be3577de1ac6a989e6114fef5fbcc93f379216db70a4e6d898838fa6f8cefd Nov 1 00:23:19.961350 containerd[1713]: 2025-11-01 00:23:19.881 [INFO][4513] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.34.128/26 handle="k8s-pod-network.00be3577de1ac6a989e6114fef5fbcc93f379216db70a4e6d898838fa6f8cefd" host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:19.961350 containerd[1713]: 2025-11-01 00:23:19.886 [INFO][4513] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.34.129/26] block=192.168.34.128/26 handle="k8s-pod-network.00be3577de1ac6a989e6114fef5fbcc93f379216db70a4e6d898838fa6f8cefd" host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:19.961350 containerd[1713]: 2025-11-01 00:23:19.886 [INFO][4513] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.129/26] handle="k8s-pod-network.00be3577de1ac6a989e6114fef5fbcc93f379216db70a4e6d898838fa6f8cefd" host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:19.961350 containerd[1713]: 2025-11-01 00:23:19.886 [INFO][4513] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:19.961350 containerd[1713]: 2025-11-01 00:23:19.886 [INFO][4513] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.34.129/26] IPv6=[] ContainerID="00be3577de1ac6a989e6114fef5fbcc93f379216db70a4e6d898838fa6f8cefd" HandleID="k8s-pod-network.00be3577de1ac6a989e6114fef5fbcc93f379216db70a4e6d898838fa6f8cefd" Workload="ci--4081.3.6--n--534d15dd10-k8s-whisker--5c6f5f86c9--hxs55-eth0" Nov 1 00:23:19.962373 containerd[1713]: 2025-11-01 00:23:19.888 [INFO][4501] cni-plugin/k8s.go 418: Populated endpoint ContainerID="00be3577de1ac6a989e6114fef5fbcc93f379216db70a4e6d898838fa6f8cefd" Namespace="calico-system" Pod="whisker-5c6f5f86c9-hxs55" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-whisker--5c6f5f86c9--hxs55-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--534d15dd10-k8s-whisker--5c6f5f86c9--hxs55-eth0", GenerateName:"whisker-5c6f5f86c9-", Namespace:"calico-system", SelfLink:"", UID:"729ae25b-84a0-42aa-9bbf-32506f51f3c1", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5c6f5f86c9", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-534d15dd10", ContainerID:"", Pod:"whisker-5c6f5f86c9-hxs55", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.34.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif5dd0479945", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:19.962373 containerd[1713]: 2025-11-01 00:23:19.888 [INFO][4501] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.129/32] ContainerID="00be3577de1ac6a989e6114fef5fbcc93f379216db70a4e6d898838fa6f8cefd" Namespace="calico-system" Pod="whisker-5c6f5f86c9-hxs55" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-whisker--5c6f5f86c9--hxs55-eth0" Nov 1 00:23:19.962373 containerd[1713]: 2025-11-01 00:23:19.888 [INFO][4501] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif5dd0479945 ContainerID="00be3577de1ac6a989e6114fef5fbcc93f379216db70a4e6d898838fa6f8cefd" Namespace="calico-system" Pod="whisker-5c6f5f86c9-hxs55" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-whisker--5c6f5f86c9--hxs55-eth0" Nov 1 00:23:19.962373 containerd[1713]: 2025-11-01 00:23:19.927 [INFO][4501] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="00be3577de1ac6a989e6114fef5fbcc93f379216db70a4e6d898838fa6f8cefd" Namespace="calico-system" Pod="whisker-5c6f5f86c9-hxs55" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-whisker--5c6f5f86c9--hxs55-eth0" Nov 1 00:23:19.962373 containerd[1713]: 2025-11-01 00:23:19.928 [INFO][4501] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="00be3577de1ac6a989e6114fef5fbcc93f379216db70a4e6d898838fa6f8cefd" Namespace="calico-system" Pod="whisker-5c6f5f86c9-hxs55" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-whisker--5c6f5f86c9--hxs55-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--534d15dd10-k8s-whisker--5c6f5f86c9--hxs55-eth0", GenerateName:"whisker-5c6f5f86c9-", Namespace:"calico-system", SelfLink:"", UID:"729ae25b-84a0-42aa-9bbf-32506f51f3c1", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5c6f5f86c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-534d15dd10", ContainerID:"00be3577de1ac6a989e6114fef5fbcc93f379216db70a4e6d898838fa6f8cefd", Pod:"whisker-5c6f5f86c9-hxs55", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.34.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif5dd0479945", MAC:"2e:c6:f7:bf:72:96", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:19.962373 containerd[1713]: 2025-11-01 00:23:19.956 [INFO][4501] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="00be3577de1ac6a989e6114fef5fbcc93f379216db70a4e6d898838fa6f8cefd" 
Namespace="calico-system" Pod="whisker-5c6f5f86c9-hxs55" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-whisker--5c6f5f86c9--hxs55-eth0" Nov 1 00:23:20.015704 containerd[1713]: time="2025-11-01T00:23:20.014885791Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:23:20.015704 containerd[1713]: time="2025-11-01T00:23:20.014961192Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:23:20.015704 containerd[1713]: time="2025-11-01T00:23:20.014984292Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:20.015704 containerd[1713]: time="2025-11-01T00:23:20.015076293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:20.062717 systemd[1]: Started cri-containerd-00be3577de1ac6a989e6114fef5fbcc93f379216db70a4e6d898838fa6f8cefd.scope - libcontainer container 00be3577de1ac6a989e6114fef5fbcc93f379216db70a4e6d898838fa6f8cefd. 
Nov 1 00:23:20.093606 kubelet[3194]: I1101 00:23:20.092215 3194 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22127e05-14d3-460b-b2fa-c64a0fa29218" path="/var/lib/kubelet/pods/22127e05-14d3-460b-b2fa-c64a0fa29218/volumes" Nov 1 00:23:20.220390 containerd[1713]: time="2025-11-01T00:23:20.220219282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c6f5f86c9-hxs55,Uid:729ae25b-84a0-42aa-9bbf-32506f51f3c1,Namespace:calico-system,Attempt:0,} returns sandbox id \"00be3577de1ac6a989e6114fef5fbcc93f379216db70a4e6d898838fa6f8cefd\"" Nov 1 00:23:20.225433 containerd[1713]: time="2025-11-01T00:23:20.225387843Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:23:20.461119 containerd[1713]: time="2025-11-01T00:23:20.460860186Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:20.464456 containerd[1713]: time="2025-11-01T00:23:20.464261728Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:23:20.464456 containerd[1713]: time="2025-11-01T00:23:20.464305029Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:23:20.464738 kubelet[3194]: E1101 00:23:20.464602 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:23:20.464738 kubelet[3194]: E1101 00:23:20.464659 3194 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:23:20.464868 kubelet[3194]: E1101 00:23:20.464759 3194 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-5c6f5f86c9-hxs55_calico-system(729ae25b-84a0-42aa-9bbf-32506f51f3c1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:20.467005 containerd[1713]: time="2025-11-01T00:23:20.466978362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:23:20.488554 kernel: bpftool[4667]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 1 00:23:20.705906 containerd[1713]: time="2025-11-01T00:23:20.705713854Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:20.708718 containerd[1713]: time="2025-11-01T00:23:20.708561288Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:23:20.708718 containerd[1713]: time="2025-11-01T00:23:20.708666589Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:23:20.710766 kubelet[3194]: E1101 00:23:20.709041 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:23:20.710766 kubelet[3194]: E1101 00:23:20.709091 3194 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:23:20.710766 kubelet[3194]: E1101 00:23:20.709174 3194 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-5c6f5f86c9-hxs55_calico-system(729ae25b-84a0-42aa-9bbf-32506f51f3c1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:20.711310 kubelet[3194]: E1101 00:23:20.709224 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" 
pod="calico-system/whisker-5c6f5f86c9-hxs55" podUID="729ae25b-84a0-42aa-9bbf-32506f51f3c1" Nov 1 00:23:20.910364 systemd-networkd[1344]: vxlan.calico: Link UP Nov 1 00:23:20.910375 systemd-networkd[1344]: vxlan.calico: Gained carrier Nov 1 00:23:21.101152 systemd-networkd[1344]: calif5dd0479945: Gained IPv6LL Nov 1 00:23:21.333222 kubelet[3194]: E1101 00:23:21.333073 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c6f5f86c9-hxs55" podUID="729ae25b-84a0-42aa-9bbf-32506f51f3c1" Nov 1 00:23:22.252838 systemd-networkd[1344]: vxlan.calico: Gained IPv6LL Nov 1 00:23:23.090069 containerd[1713]: time="2025-11-01T00:23:23.089124251Z" level=info msg="StopPodSandbox for \"c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456\"" Nov 1 00:23:23.172774 containerd[1713]: 2025-11-01 00:23:23.139 [INFO][4774] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456" Nov 1 00:23:23.172774 containerd[1713]: 2025-11-01 00:23:23.139 [INFO][4774] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456" iface="eth0" netns="/var/run/netns/cni-0ce540d9-34f6-65f3-52e2-b316617c41c8" Nov 1 00:23:23.172774 containerd[1713]: 2025-11-01 00:23:23.140 [INFO][4774] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456" iface="eth0" netns="/var/run/netns/cni-0ce540d9-34f6-65f3-52e2-b316617c41c8" Nov 1 00:23:23.172774 containerd[1713]: 2025-11-01 00:23:23.141 [INFO][4774] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456" iface="eth0" netns="/var/run/netns/cni-0ce540d9-34f6-65f3-52e2-b316617c41c8" Nov 1 00:23:23.172774 containerd[1713]: 2025-11-01 00:23:23.141 [INFO][4774] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456" Nov 1 00:23:23.172774 containerd[1713]: 2025-11-01 00:23:23.141 [INFO][4774] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456" Nov 1 00:23:23.172774 containerd[1713]: 2025-11-01 00:23:23.163 [INFO][4782] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456" HandleID="k8s-pod-network.c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456" Workload="ci--4081.3.6--n--534d15dd10-k8s-calico--kube--controllers--d9dc766d8--sj8dp-eth0" Nov 1 00:23:23.172774 containerd[1713]: 2025-11-01 00:23:23.163 [INFO][4782] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:23.172774 containerd[1713]: 2025-11-01 00:23:23.163 [INFO][4782] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:23:23.172774 containerd[1713]: 2025-11-01 00:23:23.169 [WARNING][4782] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456" HandleID="k8s-pod-network.c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456" Workload="ci--4081.3.6--n--534d15dd10-k8s-calico--kube--controllers--d9dc766d8--sj8dp-eth0" Nov 1 00:23:23.172774 containerd[1713]: 2025-11-01 00:23:23.169 [INFO][4782] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456" HandleID="k8s-pod-network.c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456" Workload="ci--4081.3.6--n--534d15dd10-k8s-calico--kube--controllers--d9dc766d8--sj8dp-eth0" Nov 1 00:23:23.172774 containerd[1713]: 2025-11-01 00:23:23.170 [INFO][4782] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:23.172774 containerd[1713]: 2025-11-01 00:23:23.171 [INFO][4774] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456" Nov 1 00:23:23.174602 containerd[1713]: time="2025-11-01T00:23:23.174541176Z" level=info msg="TearDown network for sandbox \"c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456\" successfully" Nov 1 00:23:23.174602 containerd[1713]: time="2025-11-01T00:23:23.174588176Z" level=info msg="StopPodSandbox for \"c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456\" returns successfully" Nov 1 00:23:23.177723 systemd[1]: run-netns-cni\x2d0ce540d9\x2d34f6\x2d65f3\x2d52e2\x2db316617c41c8.mount: Deactivated successfully. 
Nov 1 00:23:23.182027 containerd[1713]: time="2025-11-01T00:23:23.181993565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d9dc766d8-sj8dp,Uid:fcbbf525-3d8d-4b5d-819a-2cf75639fa8a,Namespace:calico-system,Attempt:1,}" Nov 1 00:23:23.314306 systemd-networkd[1344]: cali7246475627f: Link UP Nov 1 00:23:23.315290 systemd-networkd[1344]: cali7246475627f: Gained carrier Nov 1 00:23:23.337882 containerd[1713]: 2025-11-01 00:23:23.253 [INFO][4788] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--534d15dd10-k8s-calico--kube--controllers--d9dc766d8--sj8dp-eth0 calico-kube-controllers-d9dc766d8- calico-system fcbbf525-3d8d-4b5d-819a-2cf75639fa8a 922 0 2025-11-01 00:22:58 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:d9dc766d8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-n-534d15dd10 calico-kube-controllers-d9dc766d8-sj8dp eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7246475627f [] [] }} ContainerID="e5f1facbc27163fabac73afa34c032d771f3d3e91fc8a1b4a93a5203e5cfbdd3" Namespace="calico-system" Pod="calico-kube-controllers-d9dc766d8-sj8dp" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-calico--kube--controllers--d9dc766d8--sj8dp-" Nov 1 00:23:23.337882 containerd[1713]: 2025-11-01 00:23:23.253 [INFO][4788] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e5f1facbc27163fabac73afa34c032d771f3d3e91fc8a1b4a93a5203e5cfbdd3" Namespace="calico-system" Pod="calico-kube-controllers-d9dc766d8-sj8dp" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-calico--kube--controllers--d9dc766d8--sj8dp-eth0" Nov 1 00:23:23.337882 containerd[1713]: 2025-11-01 00:23:23.276 [INFO][4800] ipam/ipam_plugin.go 227: Calico CNI IPAM 
request count IPv4=1 IPv6=0 ContainerID="e5f1facbc27163fabac73afa34c032d771f3d3e91fc8a1b4a93a5203e5cfbdd3" HandleID="k8s-pod-network.e5f1facbc27163fabac73afa34c032d771f3d3e91fc8a1b4a93a5203e5cfbdd3" Workload="ci--4081.3.6--n--534d15dd10-k8s-calico--kube--controllers--d9dc766d8--sj8dp-eth0" Nov 1 00:23:23.337882 containerd[1713]: 2025-11-01 00:23:23.277 [INFO][4800] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e5f1facbc27163fabac73afa34c032d771f3d3e91fc8a1b4a93a5203e5cfbdd3" HandleID="k8s-pod-network.e5f1facbc27163fabac73afa34c032d771f3d3e91fc8a1b4a93a5203e5cfbdd3" Workload="ci--4081.3.6--n--534d15dd10-k8s-calico--kube--controllers--d9dc766d8--sj8dp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c55a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-534d15dd10", "pod":"calico-kube-controllers-d9dc766d8-sj8dp", "timestamp":"2025-11-01 00:23:23.276975405 +0000 UTC"}, Hostname:"ci-4081.3.6-n-534d15dd10", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:23:23.337882 containerd[1713]: 2025-11-01 00:23:23.277 [INFO][4800] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:23.337882 containerd[1713]: 2025-11-01 00:23:23.277 [INFO][4800] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:23:23.337882 containerd[1713]: 2025-11-01 00:23:23.277 [INFO][4800] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-534d15dd10' Nov 1 00:23:23.337882 containerd[1713]: 2025-11-01 00:23:23.283 [INFO][4800] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e5f1facbc27163fabac73afa34c032d771f3d3e91fc8a1b4a93a5203e5cfbdd3" host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:23.337882 containerd[1713]: 2025-11-01 00:23:23.287 [INFO][4800] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:23.337882 containerd[1713]: 2025-11-01 00:23:23.291 [INFO][4800] ipam/ipam.go 511: Trying affinity for 192.168.34.128/26 host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:23.337882 containerd[1713]: 2025-11-01 00:23:23.293 [INFO][4800] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.128/26 host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:23.337882 containerd[1713]: 2025-11-01 00:23:23.294 [INFO][4800] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.128/26 host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:23.337882 containerd[1713]: 2025-11-01 00:23:23.294 [INFO][4800] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.34.128/26 handle="k8s-pod-network.e5f1facbc27163fabac73afa34c032d771f3d3e91fc8a1b4a93a5203e5cfbdd3" host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:23.337882 containerd[1713]: 2025-11-01 00:23:23.297 [INFO][4800] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e5f1facbc27163fabac73afa34c032d771f3d3e91fc8a1b4a93a5203e5cfbdd3 Nov 1 00:23:23.337882 containerd[1713]: 2025-11-01 00:23:23.302 [INFO][4800] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.34.128/26 handle="k8s-pod-network.e5f1facbc27163fabac73afa34c032d771f3d3e91fc8a1b4a93a5203e5cfbdd3" host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:23.337882 containerd[1713]: 2025-11-01 00:23:23.309 [INFO][4800] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.34.130/26] block=192.168.34.128/26 handle="k8s-pod-network.e5f1facbc27163fabac73afa34c032d771f3d3e91fc8a1b4a93a5203e5cfbdd3" host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:23.337882 containerd[1713]: 2025-11-01 00:23:23.309 [INFO][4800] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.130/26] handle="k8s-pod-network.e5f1facbc27163fabac73afa34c032d771f3d3e91fc8a1b4a93a5203e5cfbdd3" host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:23.337882 containerd[1713]: 2025-11-01 00:23:23.309 [INFO][4800] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:23.337882 containerd[1713]: 2025-11-01 00:23:23.309 [INFO][4800] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.34.130/26] IPv6=[] ContainerID="e5f1facbc27163fabac73afa34c032d771f3d3e91fc8a1b4a93a5203e5cfbdd3" HandleID="k8s-pod-network.e5f1facbc27163fabac73afa34c032d771f3d3e91fc8a1b4a93a5203e5cfbdd3" Workload="ci--4081.3.6--n--534d15dd10-k8s-calico--kube--controllers--d9dc766d8--sj8dp-eth0" Nov 1 00:23:23.339066 containerd[1713]: 2025-11-01 00:23:23.311 [INFO][4788] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e5f1facbc27163fabac73afa34c032d771f3d3e91fc8a1b4a93a5203e5cfbdd3" Namespace="calico-system" Pod="calico-kube-controllers-d9dc766d8-sj8dp" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-calico--kube--controllers--d9dc766d8--sj8dp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--534d15dd10-k8s-calico--kube--controllers--d9dc766d8--sj8dp-eth0", GenerateName:"calico-kube-controllers-d9dc766d8-", Namespace:"calico-system", SelfLink:"", UID:"fcbbf525-3d8d-4b5d-819a-2cf75639fa8a", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", 
"k8s-app":"calico-kube-controllers", "pod-template-hash":"d9dc766d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-534d15dd10", ContainerID:"", Pod:"calico-kube-controllers-d9dc766d8-sj8dp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.34.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7246475627f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:23.339066 containerd[1713]: 2025-11-01 00:23:23.311 [INFO][4788] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.130/32] ContainerID="e5f1facbc27163fabac73afa34c032d771f3d3e91fc8a1b4a93a5203e5cfbdd3" Namespace="calico-system" Pod="calico-kube-controllers-d9dc766d8-sj8dp" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-calico--kube--controllers--d9dc766d8--sj8dp-eth0" Nov 1 00:23:23.339066 containerd[1713]: 2025-11-01 00:23:23.311 [INFO][4788] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7246475627f ContainerID="e5f1facbc27163fabac73afa34c032d771f3d3e91fc8a1b4a93a5203e5cfbdd3" Namespace="calico-system" Pod="calico-kube-controllers-d9dc766d8-sj8dp" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-calico--kube--controllers--d9dc766d8--sj8dp-eth0" Nov 1 00:23:23.339066 containerd[1713]: 2025-11-01 00:23:23.315 [INFO][4788] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e5f1facbc27163fabac73afa34c032d771f3d3e91fc8a1b4a93a5203e5cfbdd3" Namespace="calico-system" 
Pod="calico-kube-controllers-d9dc766d8-sj8dp" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-calico--kube--controllers--d9dc766d8--sj8dp-eth0" Nov 1 00:23:23.339066 containerd[1713]: 2025-11-01 00:23:23.316 [INFO][4788] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e5f1facbc27163fabac73afa34c032d771f3d3e91fc8a1b4a93a5203e5cfbdd3" Namespace="calico-system" Pod="calico-kube-controllers-d9dc766d8-sj8dp" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-calico--kube--controllers--d9dc766d8--sj8dp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--534d15dd10-k8s-calico--kube--controllers--d9dc766d8--sj8dp-eth0", GenerateName:"calico-kube-controllers-d9dc766d8-", Namespace:"calico-system", SelfLink:"", UID:"fcbbf525-3d8d-4b5d-819a-2cf75639fa8a", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d9dc766d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-534d15dd10", ContainerID:"e5f1facbc27163fabac73afa34c032d771f3d3e91fc8a1b4a93a5203e5cfbdd3", Pod:"calico-kube-controllers-d9dc766d8-sj8dp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.34.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7246475627f", MAC:"8a:66:f5:c5:34:2f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:23.339066 containerd[1713]: 2025-11-01 00:23:23.331 [INFO][4788] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e5f1facbc27163fabac73afa34c032d771f3d3e91fc8a1b4a93a5203e5cfbdd3" Namespace="calico-system" Pod="calico-kube-controllers-d9dc766d8-sj8dp" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-calico--kube--controllers--d9dc766d8--sj8dp-eth0" Nov 1 00:23:23.364592 containerd[1713]: time="2025-11-01T00:23:23.363338541Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:23:23.364592 containerd[1713]: time="2025-11-01T00:23:23.363416742Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:23:23.364592 containerd[1713]: time="2025-11-01T00:23:23.363568844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:23.364873 containerd[1713]: time="2025-11-01T00:23:23.363692845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:23.391715 systemd[1]: Started cri-containerd-e5f1facbc27163fabac73afa34c032d771f3d3e91fc8a1b4a93a5203e5cfbdd3.scope - libcontainer container e5f1facbc27163fabac73afa34c032d771f3d3e91fc8a1b4a93a5203e5cfbdd3. 
Nov 1 00:23:23.437066 containerd[1713]: time="2025-11-01T00:23:23.437023925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-d9dc766d8-sj8dp,Uid:fcbbf525-3d8d-4b5d-819a-2cf75639fa8a,Namespace:calico-system,Attempt:1,} returns sandbox id \"e5f1facbc27163fabac73afa34c032d771f3d3e91fc8a1b4a93a5203e5cfbdd3\"" Nov 1 00:23:23.438848 containerd[1713]: time="2025-11-01T00:23:23.438740846Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:23:23.673877 containerd[1713]: time="2025-11-01T00:23:23.673716265Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:23.678026 containerd[1713]: time="2025-11-01T00:23:23.677828714Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:23:23.678026 containerd[1713]: time="2025-11-01T00:23:23.677954216Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 00:23:23.679517 kubelet[3194]: E1101 00:23:23.678352 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:23:23.679517 kubelet[3194]: E1101 00:23:23.678476 3194 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:23:23.679517 kubelet[3194]: E1101 00:23:23.678684 3194 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-d9dc766d8-sj8dp_calico-system(fcbbf525-3d8d-4b5d-819a-2cf75639fa8a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:23.679517 kubelet[3194]: E1101 00:23:23.678854 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d9dc766d8-sj8dp" podUID="fcbbf525-3d8d-4b5d-819a-2cf75639fa8a" Nov 1 00:23:24.091562 containerd[1713]: time="2025-11-01T00:23:24.090062761Z" level=info msg="StopPodSandbox for \"febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4\"" Nov 1 00:23:24.091562 containerd[1713]: time="2025-11-01T00:23:24.090464365Z" level=info msg="StopPodSandbox for \"3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b\"" Nov 1 00:23:24.093704 containerd[1713]: time="2025-11-01T00:23:24.093601703Z" level=info msg="StopPodSandbox for \"81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7\"" Nov 1 00:23:24.258616 containerd[1713]: 2025-11-01 00:23:24.174 [INFO][4878] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4" Nov 1 00:23:24.258616 containerd[1713]: 2025-11-01 00:23:24.176 [INFO][4878] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4" iface="eth0" netns="/var/run/netns/cni-a21cb236-bd53-19f9-4801-559dc5e1498f" Nov 1 00:23:24.258616 containerd[1713]: 2025-11-01 00:23:24.178 [INFO][4878] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4" iface="eth0" netns="/var/run/netns/cni-a21cb236-bd53-19f9-4801-559dc5e1498f" Nov 1 00:23:24.258616 containerd[1713]: 2025-11-01 00:23:24.180 [INFO][4878] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4" iface="eth0" netns="/var/run/netns/cni-a21cb236-bd53-19f9-4801-559dc5e1498f" Nov 1 00:23:24.258616 containerd[1713]: 2025-11-01 00:23:24.180 [INFO][4878] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4" Nov 1 00:23:24.258616 containerd[1713]: 2025-11-01 00:23:24.180 [INFO][4878] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4" Nov 1 00:23:24.258616 containerd[1713]: 2025-11-01 00:23:24.233 [INFO][4899] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4" HandleID="k8s-pod-network.febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4" Workload="ci--4081.3.6--n--534d15dd10-k8s-coredns--66bc5c9577--k5c5g-eth0" Nov 1 00:23:24.258616 containerd[1713]: 2025-11-01 00:23:24.234 [INFO][4899] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 1 00:23:24.258616 containerd[1713]: 2025-11-01 00:23:24.234 [INFO][4899] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:24.258616 containerd[1713]: 2025-11-01 00:23:24.250 [WARNING][4899] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4" HandleID="k8s-pod-network.febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4" Workload="ci--4081.3.6--n--534d15dd10-k8s-coredns--66bc5c9577--k5c5g-eth0" Nov 1 00:23:24.258616 containerd[1713]: 2025-11-01 00:23:24.250 [INFO][4899] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4" HandleID="k8s-pod-network.febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4" Workload="ci--4081.3.6--n--534d15dd10-k8s-coredns--66bc5c9577--k5c5g-eth0" Nov 1 00:23:24.258616 containerd[1713]: 2025-11-01 00:23:24.253 [INFO][4899] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:24.258616 containerd[1713]: 2025-11-01 00:23:24.255 [INFO][4878] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4" Nov 1 00:23:24.262934 containerd[1713]: time="2025-11-01T00:23:24.259717696Z" level=info msg="TearDown network for sandbox \"febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4\" successfully" Nov 1 00:23:24.262934 containerd[1713]: time="2025-11-01T00:23:24.262650531Z" level=info msg="StopPodSandbox for \"febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4\" returns successfully" Nov 1 00:23:24.269490 systemd[1]: run-netns-cni\x2da21cb236\x2dbd53\x2d19f9\x2d4801\x2d559dc5e1498f.mount: Deactivated successfully. 
Nov 1 00:23:24.271438 containerd[1713]: time="2025-11-01T00:23:24.270843630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-k5c5g,Uid:a83589c5-3f06-47b3-8533-6e8d610b7e5a,Namespace:kube-system,Attempt:1,}" Nov 1 00:23:24.302079 containerd[1713]: 2025-11-01 00:23:24.222 [INFO][4886] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7" Nov 1 00:23:24.302079 containerd[1713]: 2025-11-01 00:23:24.222 [INFO][4886] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7" iface="eth0" netns="/var/run/netns/cni-c6295fd1-8b58-c77a-b0ab-98956cbc2eed" Nov 1 00:23:24.302079 containerd[1713]: 2025-11-01 00:23:24.223 [INFO][4886] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7" iface="eth0" netns="/var/run/netns/cni-c6295fd1-8b58-c77a-b0ab-98956cbc2eed" Nov 1 00:23:24.302079 containerd[1713]: 2025-11-01 00:23:24.224 [INFO][4886] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7" iface="eth0" netns="/var/run/netns/cni-c6295fd1-8b58-c77a-b0ab-98956cbc2eed" Nov 1 00:23:24.302079 containerd[1713]: 2025-11-01 00:23:24.224 [INFO][4886] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7" Nov 1 00:23:24.302079 containerd[1713]: 2025-11-01 00:23:24.224 [INFO][4886] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7" Nov 1 00:23:24.302079 containerd[1713]: 2025-11-01 00:23:24.276 [INFO][4909] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7" HandleID="k8s-pod-network.81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7" Workload="ci--4081.3.6--n--534d15dd10-k8s-calico--apiserver--6f9c5c4598--f295m-eth0" Nov 1 00:23:24.302079 containerd[1713]: 2025-11-01 00:23:24.276 [INFO][4909] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:24.302079 containerd[1713]: 2025-11-01 00:23:24.276 [INFO][4909] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:24.302079 containerd[1713]: 2025-11-01 00:23:24.292 [WARNING][4909] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7" HandleID="k8s-pod-network.81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7" Workload="ci--4081.3.6--n--534d15dd10-k8s-calico--apiserver--6f9c5c4598--f295m-eth0" Nov 1 00:23:24.302079 containerd[1713]: 2025-11-01 00:23:24.295 [INFO][4909] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7" HandleID="k8s-pod-network.81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7" Workload="ci--4081.3.6--n--534d15dd10-k8s-calico--apiserver--6f9c5c4598--f295m-eth0" Nov 1 00:23:24.302079 containerd[1713]: 2025-11-01 00:23:24.296 [INFO][4909] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:24.302079 containerd[1713]: 2025-11-01 00:23:24.299 [INFO][4886] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7" Nov 1 00:23:24.303941 containerd[1713]: time="2025-11-01T00:23:24.303857826Z" level=info msg="TearDown network for sandbox \"81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7\" successfully" Nov 1 00:23:24.303941 containerd[1713]: time="2025-11-01T00:23:24.303893626Z" level=info msg="StopPodSandbox for \"81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7\" returns successfully" Nov 1 00:23:24.311026 systemd[1]: run-netns-cni\x2dc6295fd1\x2d8b58\x2dc77a\x2db0ab\x2d98956cbc2eed.mount: Deactivated successfully. 
Nov 1 00:23:24.313462 containerd[1713]: time="2025-11-01T00:23:24.313412440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f9c5c4598-f295m,Uid:d8da81c8-f689-4aff-8f06-3115f31a2434,Namespace:calico-apiserver,Attempt:1,}" Nov 1 00:23:24.315632 containerd[1713]: 2025-11-01 00:23:24.216 [INFO][4879] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b" Nov 1 00:23:24.315632 containerd[1713]: 2025-11-01 00:23:24.217 [INFO][4879] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b" iface="eth0" netns="/var/run/netns/cni-88eea96c-8afa-d217-1e5f-a93c96631c1f" Nov 1 00:23:24.315632 containerd[1713]: 2025-11-01 00:23:24.217 [INFO][4879] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b" iface="eth0" netns="/var/run/netns/cni-88eea96c-8afa-d217-1e5f-a93c96631c1f" Nov 1 00:23:24.315632 containerd[1713]: 2025-11-01 00:23:24.218 [INFO][4879] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b" iface="eth0" netns="/var/run/netns/cni-88eea96c-8afa-d217-1e5f-a93c96631c1f" Nov 1 00:23:24.315632 containerd[1713]: 2025-11-01 00:23:24.218 [INFO][4879] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b" Nov 1 00:23:24.315632 containerd[1713]: 2025-11-01 00:23:24.218 [INFO][4879] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b" Nov 1 00:23:24.315632 containerd[1713]: 2025-11-01 00:23:24.284 [INFO][4907] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b" HandleID="k8s-pod-network.3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b" Workload="ci--4081.3.6--n--534d15dd10-k8s-goldmane--7c778bb748--f7h6c-eth0" Nov 1 00:23:24.315632 containerd[1713]: 2025-11-01 00:23:24.286 [INFO][4907] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:24.315632 containerd[1713]: 2025-11-01 00:23:24.296 [INFO][4907] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:24.315632 containerd[1713]: 2025-11-01 00:23:24.305 [WARNING][4907] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b" HandleID="k8s-pod-network.3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b" Workload="ci--4081.3.6--n--534d15dd10-k8s-goldmane--7c778bb748--f7h6c-eth0" Nov 1 00:23:24.315632 containerd[1713]: 2025-11-01 00:23:24.305 [INFO][4907] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b" HandleID="k8s-pod-network.3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b" Workload="ci--4081.3.6--n--534d15dd10-k8s-goldmane--7c778bb748--f7h6c-eth0" Nov 1 00:23:24.315632 containerd[1713]: 2025-11-01 00:23:24.307 [INFO][4907] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:24.315632 containerd[1713]: 2025-11-01 00:23:24.313 [INFO][4879] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b" Nov 1 00:23:24.319586 containerd[1713]: time="2025-11-01T00:23:24.319518614Z" level=info msg="TearDown network for sandbox \"3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b\" successfully" Nov 1 00:23:24.319682 containerd[1713]: time="2025-11-01T00:23:24.319664215Z" level=info msg="StopPodSandbox for \"3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b\" returns successfully" Nov 1 00:23:24.321400 systemd[1]: run-netns-cni\x2d88eea96c\x2d8afa\x2dd217\x2d1e5f\x2da93c96631c1f.mount: Deactivated successfully. 
Nov 1 00:23:24.333055 containerd[1713]: time="2025-11-01T00:23:24.333025576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-f7h6c,Uid:389b7b2a-9963-4ce4-a0c8-a7f3fe88a917,Namespace:calico-system,Attempt:1,}" Nov 1 00:23:24.349709 kubelet[3194]: E1101 00:23:24.349346 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d9dc766d8-sj8dp" podUID="fcbbf525-3d8d-4b5d-819a-2cf75639fa8a" Nov 1 00:23:24.578267 systemd-networkd[1344]: calic704c3e0827: Link UP Nov 1 00:23:24.579760 systemd-networkd[1344]: calic704c3e0827: Gained carrier Nov 1 00:23:24.595828 containerd[1713]: 2025-11-01 00:23:24.425 [INFO][4922] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--534d15dd10-k8s-coredns--66bc5c9577--k5c5g-eth0 coredns-66bc5c9577- kube-system a83589c5-3f06-47b3-8533-6e8d610b7e5a 932 0 2025-11-01 00:22:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-534d15dd10 coredns-66bc5c9577-k5c5g eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic704c3e0827 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="54d69ab2a881c500828eaf3ee60d239056a7b3b17b462eeef4b857ee6d93a882" Namespace="kube-system" Pod="coredns-66bc5c9577-k5c5g" 
WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-coredns--66bc5c9577--k5c5g-" Nov 1 00:23:24.595828 containerd[1713]: 2025-11-01 00:23:24.426 [INFO][4922] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="54d69ab2a881c500828eaf3ee60d239056a7b3b17b462eeef4b857ee6d93a882" Namespace="kube-system" Pod="coredns-66bc5c9577-k5c5g" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-coredns--66bc5c9577--k5c5g-eth0" Nov 1 00:23:24.595828 containerd[1713]: 2025-11-01 00:23:24.507 [INFO][4953] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="54d69ab2a881c500828eaf3ee60d239056a7b3b17b462eeef4b857ee6d93a882" HandleID="k8s-pod-network.54d69ab2a881c500828eaf3ee60d239056a7b3b17b462eeef4b857ee6d93a882" Workload="ci--4081.3.6--n--534d15dd10-k8s-coredns--66bc5c9577--k5c5g-eth0" Nov 1 00:23:24.595828 containerd[1713]: 2025-11-01 00:23:24.509 [INFO][4953] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="54d69ab2a881c500828eaf3ee60d239056a7b3b17b462eeef4b857ee6d93a882" HandleID="k8s-pod-network.54d69ab2a881c500828eaf3ee60d239056a7b3b17b462eeef4b857ee6d93a882" Workload="ci--4081.3.6--n--534d15dd10-k8s-coredns--66bc5c9577--k5c5g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003d4100), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-534d15dd10", "pod":"coredns-66bc5c9577-k5c5g", "timestamp":"2025-11-01 00:23:24.507328567 +0000 UTC"}, Hostname:"ci-4081.3.6-n-534d15dd10", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:23:24.595828 containerd[1713]: 2025-11-01 00:23:24.509 [INFO][4953] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:24.595828 containerd[1713]: 2025-11-01 00:23:24.509 [INFO][4953] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:23:24.595828 containerd[1713]: 2025-11-01 00:23:24.509 [INFO][4953] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-534d15dd10' Nov 1 00:23:24.595828 containerd[1713]: 2025-11-01 00:23:24.525 [INFO][4953] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.54d69ab2a881c500828eaf3ee60d239056a7b3b17b462eeef4b857ee6d93a882" host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:24.595828 containerd[1713]: 2025-11-01 00:23:24.535 [INFO][4953] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:24.595828 containerd[1713]: 2025-11-01 00:23:24.544 [INFO][4953] ipam/ipam.go 511: Trying affinity for 192.168.34.128/26 host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:24.595828 containerd[1713]: 2025-11-01 00:23:24.548 [INFO][4953] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.128/26 host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:24.595828 containerd[1713]: 2025-11-01 00:23:24.553 [INFO][4953] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.128/26 host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:24.595828 containerd[1713]: 2025-11-01 00:23:24.553 [INFO][4953] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.34.128/26 handle="k8s-pod-network.54d69ab2a881c500828eaf3ee60d239056a7b3b17b462eeef4b857ee6d93a882" host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:24.595828 containerd[1713]: 2025-11-01 00:23:24.554 [INFO][4953] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.54d69ab2a881c500828eaf3ee60d239056a7b3b17b462eeef4b857ee6d93a882 Nov 1 00:23:24.595828 containerd[1713]: 2025-11-01 00:23:24.562 [INFO][4953] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.34.128/26 handle="k8s-pod-network.54d69ab2a881c500828eaf3ee60d239056a7b3b17b462eeef4b857ee6d93a882" host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:24.595828 containerd[1713]: 2025-11-01 00:23:24.568 [INFO][4953] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.34.131/26] block=192.168.34.128/26 handle="k8s-pod-network.54d69ab2a881c500828eaf3ee60d239056a7b3b17b462eeef4b857ee6d93a882" host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:24.595828 containerd[1713]: 2025-11-01 00:23:24.568 [INFO][4953] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.131/26] handle="k8s-pod-network.54d69ab2a881c500828eaf3ee60d239056a7b3b17b462eeef4b857ee6d93a882" host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:24.595828 containerd[1713]: 2025-11-01 00:23:24.568 [INFO][4953] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:24.595828 containerd[1713]: 2025-11-01 00:23:24.568 [INFO][4953] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.34.131/26] IPv6=[] ContainerID="54d69ab2a881c500828eaf3ee60d239056a7b3b17b462eeef4b857ee6d93a882" HandleID="k8s-pod-network.54d69ab2a881c500828eaf3ee60d239056a7b3b17b462eeef4b857ee6d93a882" Workload="ci--4081.3.6--n--534d15dd10-k8s-coredns--66bc5c9577--k5c5g-eth0" Nov 1 00:23:24.597132 containerd[1713]: 2025-11-01 00:23:24.571 [INFO][4922] cni-plugin/k8s.go 418: Populated endpoint ContainerID="54d69ab2a881c500828eaf3ee60d239056a7b3b17b462eeef4b857ee6d93a882" Namespace="kube-system" Pod="coredns-66bc5c9577-k5c5g" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-coredns--66bc5c9577--k5c5g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--534d15dd10-k8s-coredns--66bc5c9577--k5c5g-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"a83589c5-3f06-47b3-8533-6e8d610b7e5a", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-534d15dd10", ContainerID:"", Pod:"coredns-66bc5c9577-k5c5g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic704c3e0827", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:24.597132 containerd[1713]: 2025-11-01 00:23:24.571 [INFO][4922] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.131/32] ContainerID="54d69ab2a881c500828eaf3ee60d239056a7b3b17b462eeef4b857ee6d93a882" Namespace="kube-system" Pod="coredns-66bc5c9577-k5c5g" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-coredns--66bc5c9577--k5c5g-eth0" Nov 1 00:23:24.597132 containerd[1713]: 2025-11-01 00:23:24.571 [INFO][4922] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic704c3e0827 
ContainerID="54d69ab2a881c500828eaf3ee60d239056a7b3b17b462eeef4b857ee6d93a882" Namespace="kube-system" Pod="coredns-66bc5c9577-k5c5g" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-coredns--66bc5c9577--k5c5g-eth0" Nov 1 00:23:24.597132 containerd[1713]: 2025-11-01 00:23:24.579 [INFO][4922] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="54d69ab2a881c500828eaf3ee60d239056a7b3b17b462eeef4b857ee6d93a882" Namespace="kube-system" Pod="coredns-66bc5c9577-k5c5g" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-coredns--66bc5c9577--k5c5g-eth0" Nov 1 00:23:24.597132 containerd[1713]: 2025-11-01 00:23:24.580 [INFO][4922] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="54d69ab2a881c500828eaf3ee60d239056a7b3b17b462eeef4b857ee6d93a882" Namespace="kube-system" Pod="coredns-66bc5c9577-k5c5g" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-coredns--66bc5c9577--k5c5g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--534d15dd10-k8s-coredns--66bc5c9577--k5c5g-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"a83589c5-3f06-47b3-8533-6e8d610b7e5a", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-534d15dd10", ContainerID:"54d69ab2a881c500828eaf3ee60d239056a7b3b17b462eeef4b857ee6d93a882", 
Pod:"coredns-66bc5c9577-k5c5g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic704c3e0827", MAC:"e2:17:3a:fd:d0:bc", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:24.597491 containerd[1713]: 2025-11-01 00:23:24.593 [INFO][4922] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="54d69ab2a881c500828eaf3ee60d239056a7b3b17b462eeef4b857ee6d93a882" Namespace="kube-system" Pod="coredns-66bc5c9577-k5c5g" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-coredns--66bc5c9577--k5c5g-eth0" Nov 1 00:23:24.625626 containerd[1713]: time="2025-11-01T00:23:24.625435284Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:23:24.626255 containerd[1713]: time="2025-11-01T00:23:24.625986091Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:23:24.626255 containerd[1713]: time="2025-11-01T00:23:24.626072092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:24.626481 containerd[1713]: time="2025-11-01T00:23:24.626234994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:24.656714 systemd[1]: Started cri-containerd-54d69ab2a881c500828eaf3ee60d239056a7b3b17b462eeef4b857ee6d93a882.scope - libcontainer container 54d69ab2a881c500828eaf3ee60d239056a7b3b17b462eeef4b857ee6d93a882. Nov 1 00:23:24.689029 systemd-networkd[1344]: calif59f37f9aa6: Link UP Nov 1 00:23:24.691739 systemd-networkd[1344]: calif59f37f9aa6: Gained carrier Nov 1 00:23:24.735712 containerd[1713]: 2025-11-01 00:23:24.474 [INFO][4932] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--534d15dd10-k8s-calico--apiserver--6f9c5c4598--f295m-eth0 calico-apiserver-6f9c5c4598- calico-apiserver d8da81c8-f689-4aff-8f06-3115f31a2434 935 0 2025-11-01 00:22:53 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f9c5c4598 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-534d15dd10 calico-apiserver-6f9c5c4598-f295m eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif59f37f9aa6 [] [] }} ContainerID="938512d66ea08bc5f534ef414219bb8cb599ca79c1bfc911f0e027ea94eb0b2f" Namespace="calico-apiserver" Pod="calico-apiserver-6f9c5c4598-f295m" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-calico--apiserver--6f9c5c4598--f295m-" Nov 1 00:23:24.735712 containerd[1713]: 2025-11-01 00:23:24.475 [INFO][4932] cni-plugin/k8s.go 74: Extracted 
identifiers for CmdAddK8s ContainerID="938512d66ea08bc5f534ef414219bb8cb599ca79c1bfc911f0e027ea94eb0b2f" Namespace="calico-apiserver" Pod="calico-apiserver-6f9c5c4598-f295m" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-calico--apiserver--6f9c5c4598--f295m-eth0" Nov 1 00:23:24.735712 containerd[1713]: 2025-11-01 00:23:24.553 [INFO][4962] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="938512d66ea08bc5f534ef414219bb8cb599ca79c1bfc911f0e027ea94eb0b2f" HandleID="k8s-pod-network.938512d66ea08bc5f534ef414219bb8cb599ca79c1bfc911f0e027ea94eb0b2f" Workload="ci--4081.3.6--n--534d15dd10-k8s-calico--apiserver--6f9c5c4598--f295m-eth0" Nov 1 00:23:24.735712 containerd[1713]: 2025-11-01 00:23:24.554 [INFO][4962] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="938512d66ea08bc5f534ef414219bb8cb599ca79c1bfc911f0e027ea94eb0b2f" HandleID="k8s-pod-network.938512d66ea08bc5f534ef414219bb8cb599ca79c1bfc911f0e027ea94eb0b2f" Workload="ci--4081.3.6--n--534d15dd10-k8s-calico--apiserver--6f9c5c4598--f295m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c81e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-534d15dd10", "pod":"calico-apiserver-6f9c5c4598-f295m", "timestamp":"2025-11-01 00:23:24.553876526 +0000 UTC"}, Hostname:"ci-4081.3.6-n-534d15dd10", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:23:24.735712 containerd[1713]: 2025-11-01 00:23:24.555 [INFO][4962] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:24.735712 containerd[1713]: 2025-11-01 00:23:24.568 [INFO][4962] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:23:24.735712 containerd[1713]: 2025-11-01 00:23:24.569 [INFO][4962] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-534d15dd10' Nov 1 00:23:24.735712 containerd[1713]: 2025-11-01 00:23:24.629 [INFO][4962] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.938512d66ea08bc5f534ef414219bb8cb599ca79c1bfc911f0e027ea94eb0b2f" host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:24.735712 containerd[1713]: 2025-11-01 00:23:24.635 [INFO][4962] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:24.735712 containerd[1713]: 2025-11-01 00:23:24.647 [INFO][4962] ipam/ipam.go 511: Trying affinity for 192.168.34.128/26 host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:24.735712 containerd[1713]: 2025-11-01 00:23:24.653 [INFO][4962] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.128/26 host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:24.735712 containerd[1713]: 2025-11-01 00:23:24.655 [INFO][4962] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.128/26 host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:24.735712 containerd[1713]: 2025-11-01 00:23:24.655 [INFO][4962] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.34.128/26 handle="k8s-pod-network.938512d66ea08bc5f534ef414219bb8cb599ca79c1bfc911f0e027ea94eb0b2f" host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:24.735712 containerd[1713]: 2025-11-01 00:23:24.658 [INFO][4962] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.938512d66ea08bc5f534ef414219bb8cb599ca79c1bfc911f0e027ea94eb0b2f Nov 1 00:23:24.735712 containerd[1713]: 2025-11-01 00:23:24.667 [INFO][4962] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.34.128/26 handle="k8s-pod-network.938512d66ea08bc5f534ef414219bb8cb599ca79c1bfc911f0e027ea94eb0b2f" host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:24.735712 containerd[1713]: 2025-11-01 00:23:24.678 [INFO][4962] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.34.132/26] block=192.168.34.128/26 handle="k8s-pod-network.938512d66ea08bc5f534ef414219bb8cb599ca79c1bfc911f0e027ea94eb0b2f" host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:24.735712 containerd[1713]: 2025-11-01 00:23:24.678 [INFO][4962] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.132/26] handle="k8s-pod-network.938512d66ea08bc5f534ef414219bb8cb599ca79c1bfc911f0e027ea94eb0b2f" host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:24.735712 containerd[1713]: 2025-11-01 00:23:24.679 [INFO][4962] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:24.735712 containerd[1713]: 2025-11-01 00:23:24.679 [INFO][4962] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.34.132/26] IPv6=[] ContainerID="938512d66ea08bc5f534ef414219bb8cb599ca79c1bfc911f0e027ea94eb0b2f" HandleID="k8s-pod-network.938512d66ea08bc5f534ef414219bb8cb599ca79c1bfc911f0e027ea94eb0b2f" Workload="ci--4081.3.6--n--534d15dd10-k8s-calico--apiserver--6f9c5c4598--f295m-eth0" Nov 1 00:23:24.736358 containerd[1713]: 2025-11-01 00:23:24.684 [INFO][4932] cni-plugin/k8s.go 418: Populated endpoint ContainerID="938512d66ea08bc5f534ef414219bb8cb599ca79c1bfc911f0e027ea94eb0b2f" Namespace="calico-apiserver" Pod="calico-apiserver-6f9c5c4598-f295m" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-calico--apiserver--6f9c5c4598--f295m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--534d15dd10-k8s-calico--apiserver--6f9c5c4598--f295m-eth0", GenerateName:"calico-apiserver-6f9c5c4598-", Namespace:"calico-apiserver", SelfLink:"", UID:"d8da81c8-f689-4aff-8f06-3115f31a2434", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"6f9c5c4598", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-534d15dd10", ContainerID:"", Pod:"calico-apiserver-6f9c5c4598-f295m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif59f37f9aa6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:24.736358 containerd[1713]: 2025-11-01 00:23:24.684 [INFO][4932] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.132/32] ContainerID="938512d66ea08bc5f534ef414219bb8cb599ca79c1bfc911f0e027ea94eb0b2f" Namespace="calico-apiserver" Pod="calico-apiserver-6f9c5c4598-f295m" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-calico--apiserver--6f9c5c4598--f295m-eth0" Nov 1 00:23:24.736358 containerd[1713]: 2025-11-01 00:23:24.684 [INFO][4932] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif59f37f9aa6 ContainerID="938512d66ea08bc5f534ef414219bb8cb599ca79c1bfc911f0e027ea94eb0b2f" Namespace="calico-apiserver" Pod="calico-apiserver-6f9c5c4598-f295m" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-calico--apiserver--6f9c5c4598--f295m-eth0" Nov 1 00:23:24.736358 containerd[1713]: 2025-11-01 00:23:24.693 [INFO][4932] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="938512d66ea08bc5f534ef414219bb8cb599ca79c1bfc911f0e027ea94eb0b2f" Namespace="calico-apiserver" Pod="calico-apiserver-6f9c5c4598-f295m" 
WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-calico--apiserver--6f9c5c4598--f295m-eth0" Nov 1 00:23:24.736358 containerd[1713]: 2025-11-01 00:23:24.696 [INFO][4932] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="938512d66ea08bc5f534ef414219bb8cb599ca79c1bfc911f0e027ea94eb0b2f" Namespace="calico-apiserver" Pod="calico-apiserver-6f9c5c4598-f295m" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-calico--apiserver--6f9c5c4598--f295m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--534d15dd10-k8s-calico--apiserver--6f9c5c4598--f295m-eth0", GenerateName:"calico-apiserver-6f9c5c4598-", Namespace:"calico-apiserver", SelfLink:"", UID:"d8da81c8-f689-4aff-8f06-3115f31a2434", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f9c5c4598", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-534d15dd10", ContainerID:"938512d66ea08bc5f534ef414219bb8cb599ca79c1bfc911f0e027ea94eb0b2f", Pod:"calico-apiserver-6f9c5c4598-f295m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif59f37f9aa6", MAC:"fa:71:a0:71:7a:67", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:24.736358 containerd[1713]: 2025-11-01 00:23:24.725 [INFO][4932] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="938512d66ea08bc5f534ef414219bb8cb599ca79c1bfc911f0e027ea94eb0b2f" Namespace="calico-apiserver" Pod="calico-apiserver-6f9c5c4598-f295m" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-calico--apiserver--6f9c5c4598--f295m-eth0" Nov 1 00:23:24.763981 containerd[1713]: time="2025-11-01T00:23:24.763755644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-k5c5g,Uid:a83589c5-3f06-47b3-8533-6e8d610b7e5a,Namespace:kube-system,Attempt:1,} returns sandbox id \"54d69ab2a881c500828eaf3ee60d239056a7b3b17b462eeef4b857ee6d93a882\"" Nov 1 00:23:24.775932 containerd[1713]: time="2025-11-01T00:23:24.775866789Z" level=info msg="CreateContainer within sandbox \"54d69ab2a881c500828eaf3ee60d239056a7b3b17b462eeef4b857ee6d93a882\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:23:24.797354 containerd[1713]: time="2025-11-01T00:23:24.795291022Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:23:24.797354 containerd[1713]: time="2025-11-01T00:23:24.797167445Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:23:24.797354 containerd[1713]: time="2025-11-01T00:23:24.797187845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:24.797354 containerd[1713]: time="2025-11-01T00:23:24.797289346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:24.806822 systemd-networkd[1344]: cali823d1d4a942: Link UP Nov 1 00:23:24.810065 systemd-networkd[1344]: cali823d1d4a942: Gained carrier Nov 1 00:23:24.826958 containerd[1713]: time="2025-11-01T00:23:24.826768800Z" level=info msg="CreateContainer within sandbox \"54d69ab2a881c500828eaf3ee60d239056a7b3b17b462eeef4b857ee6d93a882\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9bfe05a2167b10e2ce957001fa585f7b15c191655527ed354410f7e6d9b606e2\"" Nov 1 00:23:24.830134 containerd[1713]: time="2025-11-01T00:23:24.829901337Z" level=info msg="StartContainer for \"9bfe05a2167b10e2ce957001fa585f7b15c191655527ed354410f7e6d9b606e2\"" Nov 1 00:23:24.845126 containerd[1713]: 2025-11-01 00:23:24.509 [INFO][4943] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--534d15dd10-k8s-goldmane--7c778bb748--f7h6c-eth0 goldmane-7c778bb748- calico-system 389b7b2a-9963-4ce4-a0c8-a7f3fe88a917 934 0 2025-11-01 00:22:55 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-n-534d15dd10 goldmane-7c778bb748-f7h6c eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali823d1d4a942 [] [] }} ContainerID="41f9d0fe5152f8439e1faa06b81db0bbbb78716f156823c0c67229413a534856" Namespace="calico-system" Pod="goldmane-7c778bb748-f7h6c" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-goldmane--7c778bb748--f7h6c-" Nov 1 00:23:24.845126 containerd[1713]: 2025-11-01 00:23:24.510 [INFO][4943] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="41f9d0fe5152f8439e1faa06b81db0bbbb78716f156823c0c67229413a534856" Namespace="calico-system" Pod="goldmane-7c778bb748-f7h6c" 
WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-goldmane--7c778bb748--f7h6c-eth0" Nov 1 00:23:24.845126 containerd[1713]: 2025-11-01 00:23:24.561 [INFO][4970] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="41f9d0fe5152f8439e1faa06b81db0bbbb78716f156823c0c67229413a534856" HandleID="k8s-pod-network.41f9d0fe5152f8439e1faa06b81db0bbbb78716f156823c0c67229413a534856" Workload="ci--4081.3.6--n--534d15dd10-k8s-goldmane--7c778bb748--f7h6c-eth0" Nov 1 00:23:24.845126 containerd[1713]: 2025-11-01 00:23:24.561 [INFO][4970] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="41f9d0fe5152f8439e1faa06b81db0bbbb78716f156823c0c67229413a534856" HandleID="k8s-pod-network.41f9d0fe5152f8439e1faa06b81db0bbbb78716f156823c0c67229413a534856" Workload="ci--4081.3.6--n--534d15dd10-k8s-goldmane--7c778bb748--f7h6c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003058c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-534d15dd10", "pod":"goldmane-7c778bb748-f7h6c", "timestamp":"2025-11-01 00:23:24.561167413 +0000 UTC"}, Hostname:"ci-4081.3.6-n-534d15dd10", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:23:24.845126 containerd[1713]: 2025-11-01 00:23:24.561 [INFO][4970] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:24.845126 containerd[1713]: 2025-11-01 00:23:24.679 [INFO][4970] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:23:24.845126 containerd[1713]: 2025-11-01 00:23:24.679 [INFO][4970] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-534d15dd10' Nov 1 00:23:24.845126 containerd[1713]: 2025-11-01 00:23:24.729 [INFO][4970] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.41f9d0fe5152f8439e1faa06b81db0bbbb78716f156823c0c67229413a534856" host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:24.845126 containerd[1713]: 2025-11-01 00:23:24.744 [INFO][4970] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:24.845126 containerd[1713]: 2025-11-01 00:23:24.751 [INFO][4970] ipam/ipam.go 511: Trying affinity for 192.168.34.128/26 host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:24.845126 containerd[1713]: 2025-11-01 00:23:24.755 [INFO][4970] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.128/26 host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:24.845126 containerd[1713]: 2025-11-01 00:23:24.757 [INFO][4970] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.128/26 host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:24.845126 containerd[1713]: 2025-11-01 00:23:24.757 [INFO][4970] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.34.128/26 handle="k8s-pod-network.41f9d0fe5152f8439e1faa06b81db0bbbb78716f156823c0c67229413a534856" host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:24.845126 containerd[1713]: 2025-11-01 00:23:24.762 [INFO][4970] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.41f9d0fe5152f8439e1faa06b81db0bbbb78716f156823c0c67229413a534856 Nov 1 00:23:24.845126 containerd[1713]: 2025-11-01 00:23:24.779 [INFO][4970] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.34.128/26 handle="k8s-pod-network.41f9d0fe5152f8439e1faa06b81db0bbbb78716f156823c0c67229413a534856" host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:24.845126 containerd[1713]: 2025-11-01 00:23:24.789 [INFO][4970] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.34.133/26] block=192.168.34.128/26 handle="k8s-pod-network.41f9d0fe5152f8439e1faa06b81db0bbbb78716f156823c0c67229413a534856" host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:24.845126 containerd[1713]: 2025-11-01 00:23:24.789 [INFO][4970] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.133/26] handle="k8s-pod-network.41f9d0fe5152f8439e1faa06b81db0bbbb78716f156823c0c67229413a534856" host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:24.845126 containerd[1713]: 2025-11-01 00:23:24.789 [INFO][4970] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:24.845126 containerd[1713]: 2025-11-01 00:23:24.789 [INFO][4970] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.34.133/26] IPv6=[] ContainerID="41f9d0fe5152f8439e1faa06b81db0bbbb78716f156823c0c67229413a534856" HandleID="k8s-pod-network.41f9d0fe5152f8439e1faa06b81db0bbbb78716f156823c0c67229413a534856" Workload="ci--4081.3.6--n--534d15dd10-k8s-goldmane--7c778bb748--f7h6c-eth0" Nov 1 00:23:24.847403 containerd[1713]: 2025-11-01 00:23:24.796 [INFO][4943] cni-plugin/k8s.go 418: Populated endpoint ContainerID="41f9d0fe5152f8439e1faa06b81db0bbbb78716f156823c0c67229413a534856" Namespace="calico-system" Pod="goldmane-7c778bb748-f7h6c" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-goldmane--7c778bb748--f7h6c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--534d15dd10-k8s-goldmane--7c778bb748--f7h6c-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"389b7b2a-9963-4ce4-a0c8-a7f3fe88a917", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-534d15dd10", ContainerID:"", Pod:"goldmane-7c778bb748-f7h6c", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.34.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali823d1d4a942", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:24.847403 containerd[1713]: 2025-11-01 00:23:24.800 [INFO][4943] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.133/32] ContainerID="41f9d0fe5152f8439e1faa06b81db0bbbb78716f156823c0c67229413a534856" Namespace="calico-system" Pod="goldmane-7c778bb748-f7h6c" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-goldmane--7c778bb748--f7h6c-eth0" Nov 1 00:23:24.847403 containerd[1713]: 2025-11-01 00:23:24.800 [INFO][4943] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali823d1d4a942 ContainerID="41f9d0fe5152f8439e1faa06b81db0bbbb78716f156823c0c67229413a534856" Namespace="calico-system" Pod="goldmane-7c778bb748-f7h6c" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-goldmane--7c778bb748--f7h6c-eth0" Nov 1 00:23:24.847403 containerd[1713]: 2025-11-01 00:23:24.809 [INFO][4943] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="41f9d0fe5152f8439e1faa06b81db0bbbb78716f156823c0c67229413a534856" Namespace="calico-system" Pod="goldmane-7c778bb748-f7h6c" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-goldmane--7c778bb748--f7h6c-eth0" Nov 1 00:23:24.847403 containerd[1713]: 2025-11-01 00:23:24.810 [INFO][4943] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="41f9d0fe5152f8439e1faa06b81db0bbbb78716f156823c0c67229413a534856" Namespace="calico-system" Pod="goldmane-7c778bb748-f7h6c" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-goldmane--7c778bb748--f7h6c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--534d15dd10-k8s-goldmane--7c778bb748--f7h6c-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"389b7b2a-9963-4ce4-a0c8-a7f3fe88a917", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-534d15dd10", ContainerID:"41f9d0fe5152f8439e1faa06b81db0bbbb78716f156823c0c67229413a534856", Pod:"goldmane-7c778bb748-f7h6c", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.34.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali823d1d4a942", MAC:"12:0c:37:f9:9b:9e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:24.847403 containerd[1713]: 2025-11-01 00:23:24.833 [INFO][4943] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="41f9d0fe5152f8439e1faa06b81db0bbbb78716f156823c0c67229413a534856" Namespace="calico-system" Pod="goldmane-7c778bb748-f7h6c" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-goldmane--7c778bb748--f7h6c-eth0" Nov 1 00:23:24.847782 systemd[1]: Started cri-containerd-938512d66ea08bc5f534ef414219bb8cb599ca79c1bfc911f0e027ea94eb0b2f.scope - libcontainer container 938512d66ea08bc5f534ef414219bb8cb599ca79c1bfc911f0e027ea94eb0b2f. Nov 1 00:23:24.904840 systemd[1]: Started cri-containerd-9bfe05a2167b10e2ce957001fa585f7b15c191655527ed354410f7e6d9b606e2.scope - libcontainer container 9bfe05a2167b10e2ce957001fa585f7b15c191655527ed354410f7e6d9b606e2. Nov 1 00:23:24.909771 containerd[1713]: time="2025-11-01T00:23:24.909670194Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:23:24.910743 containerd[1713]: time="2025-11-01T00:23:24.909806696Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:23:24.910743 containerd[1713]: time="2025-11-01T00:23:24.909833696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:24.912163 containerd[1713]: time="2025-11-01T00:23:24.910991910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:24.954729 systemd[1]: Started cri-containerd-41f9d0fe5152f8439e1faa06b81db0bbbb78716f156823c0c67229413a534856.scope - libcontainer container 41f9d0fe5152f8439e1faa06b81db0bbbb78716f156823c0c67229413a534856. 
Nov 1 00:23:25.000310 containerd[1713]: time="2025-11-01T00:23:25.000118280Z" level=info msg="StartContainer for \"9bfe05a2167b10e2ce957001fa585f7b15c191655527ed354410f7e6d9b606e2\" returns successfully" Nov 1 00:23:25.069004 systemd-networkd[1344]: cali7246475627f: Gained IPv6LL Nov 1 00:23:25.081116 containerd[1713]: time="2025-11-01T00:23:25.080935349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f9c5c4598-f295m,Uid:d8da81c8-f689-4aff-8f06-3115f31a2434,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"938512d66ea08bc5f534ef414219bb8cb599ca79c1bfc911f0e027ea94eb0b2f\"" Nov 1 00:23:25.086523 containerd[1713]: time="2025-11-01T00:23:25.086490316Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:23:25.090551 containerd[1713]: time="2025-11-01T00:23:25.089260849Z" level=info msg="StopPodSandbox for \"c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa\"" Nov 1 00:23:25.091565 containerd[1713]: time="2025-11-01T00:23:25.091511576Z" level=info msg="StopPodSandbox for \"337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9\"" Nov 1 00:23:25.114710 containerd[1713]: time="2025-11-01T00:23:25.112933033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-f7h6c,Uid:389b7b2a-9963-4ce4-a0c8-a7f3fe88a917,Namespace:calico-system,Attempt:1,} returns sandbox id \"41f9d0fe5152f8439e1faa06b81db0bbbb78716f156823c0c67229413a534856\"" Nov 1 00:23:25.277813 containerd[1713]: 2025-11-01 00:23:25.192 [INFO][5180] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa" Nov 1 00:23:25.277813 containerd[1713]: 2025-11-01 00:23:25.193 [INFO][5180] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa" iface="eth0" netns="/var/run/netns/cni-fc40127a-4ac7-4561-c6d1-8adf91729aa4" Nov 1 00:23:25.277813 containerd[1713]: 2025-11-01 00:23:25.193 [INFO][5180] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa" iface="eth0" netns="/var/run/netns/cni-fc40127a-4ac7-4561-c6d1-8adf91729aa4" Nov 1 00:23:25.277813 containerd[1713]: 2025-11-01 00:23:25.193 [INFO][5180] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa" iface="eth0" netns="/var/run/netns/cni-fc40127a-4ac7-4561-c6d1-8adf91729aa4" Nov 1 00:23:25.277813 containerd[1713]: 2025-11-01 00:23:25.193 [INFO][5180] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa" Nov 1 00:23:25.277813 containerd[1713]: 2025-11-01 00:23:25.194 [INFO][5180] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa" Nov 1 00:23:25.277813 containerd[1713]: 2025-11-01 00:23:25.255 [INFO][5196] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa" HandleID="k8s-pod-network.c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa" Workload="ci--4081.3.6--n--534d15dd10-k8s-calico--apiserver--6f9c5c4598--4kvfs-eth0" Nov 1 00:23:25.277813 containerd[1713]: 2025-11-01 00:23:25.255 [INFO][5196] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:25.277813 containerd[1713]: 2025-11-01 00:23:25.255 [INFO][5196] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:23:25.277813 containerd[1713]: 2025-11-01 00:23:25.269 [WARNING][5196] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa" HandleID="k8s-pod-network.c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa" Workload="ci--4081.3.6--n--534d15dd10-k8s-calico--apiserver--6f9c5c4598--4kvfs-eth0" Nov 1 00:23:25.277813 containerd[1713]: 2025-11-01 00:23:25.269 [INFO][5196] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa" HandleID="k8s-pod-network.c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa" Workload="ci--4081.3.6--n--534d15dd10-k8s-calico--apiserver--6f9c5c4598--4kvfs-eth0" Nov 1 00:23:25.277813 containerd[1713]: 2025-11-01 00:23:25.271 [INFO][5196] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:25.277813 containerd[1713]: 2025-11-01 00:23:25.273 [INFO][5180] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa" Nov 1 00:23:25.279713 containerd[1713]: time="2025-11-01T00:23:25.279062827Z" level=info msg="TearDown network for sandbox \"c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa\" successfully" Nov 1 00:23:25.279713 containerd[1713]: time="2025-11-01T00:23:25.279099427Z" level=info msg="StopPodSandbox for \"c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa\" returns successfully" Nov 1 00:23:25.285852 systemd[1]: run-netns-cni\x2dfc40127a\x2d4ac7\x2d4561\x2dc6d1\x2d8adf91729aa4.mount: Deactivated successfully. 
Nov 1 00:23:25.288497 containerd[1713]: time="2025-11-01T00:23:25.288108935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f9c5c4598-4kvfs,Uid:1e7f5e79-08c7-4630-a4c4-82d9824187a0,Namespace:calico-apiserver,Attempt:1,}" Nov 1 00:23:25.290251 containerd[1713]: 2025-11-01 00:23:25.203 [INFO][5187] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9" Nov 1 00:23:25.290251 containerd[1713]: 2025-11-01 00:23:25.204 [INFO][5187] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9" iface="eth0" netns="/var/run/netns/cni-4a2c6fe5-f6a4-b8b4-7943-64e0ef3a0b08" Nov 1 00:23:25.290251 containerd[1713]: 2025-11-01 00:23:25.204 [INFO][5187] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9" iface="eth0" netns="/var/run/netns/cni-4a2c6fe5-f6a4-b8b4-7943-64e0ef3a0b08" Nov 1 00:23:25.290251 containerd[1713]: 2025-11-01 00:23:25.205 [INFO][5187] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9" iface="eth0" netns="/var/run/netns/cni-4a2c6fe5-f6a4-b8b4-7943-64e0ef3a0b08" Nov 1 00:23:25.290251 containerd[1713]: 2025-11-01 00:23:25.205 [INFO][5187] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9" Nov 1 00:23:25.290251 containerd[1713]: 2025-11-01 00:23:25.205 [INFO][5187] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9" Nov 1 00:23:25.290251 containerd[1713]: 2025-11-01 00:23:25.268 [INFO][5201] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9" HandleID="k8s-pod-network.337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9" Workload="ci--4081.3.6--n--534d15dd10-k8s-csi--node--driver--trnvf-eth0" Nov 1 00:23:25.290251 containerd[1713]: 2025-11-01 00:23:25.268 [INFO][5201] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:25.290251 containerd[1713]: 2025-11-01 00:23:25.271 [INFO][5201] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:25.290251 containerd[1713]: 2025-11-01 00:23:25.284 [WARNING][5201] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9" HandleID="k8s-pod-network.337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9" Workload="ci--4081.3.6--n--534d15dd10-k8s-csi--node--driver--trnvf-eth0" Nov 1 00:23:25.290251 containerd[1713]: 2025-11-01 00:23:25.284 [INFO][5201] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9" HandleID="k8s-pod-network.337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9" Workload="ci--4081.3.6--n--534d15dd10-k8s-csi--node--driver--trnvf-eth0" Nov 1 00:23:25.290251 containerd[1713]: 2025-11-01 00:23:25.287 [INFO][5201] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:25.290251 containerd[1713]: 2025-11-01 00:23:25.288 [INFO][5187] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9" Nov 1 00:23:25.292613 containerd[1713]: time="2025-11-01T00:23:25.290678266Z" level=info msg="TearDown network for sandbox \"337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9\" successfully" Nov 1 00:23:25.292613 containerd[1713]: time="2025-11-01T00:23:25.290718966Z" level=info msg="StopPodSandbox for \"337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9\" returns successfully" Nov 1 00:23:25.295923 systemd[1]: run-netns-cni\x2d4a2c6fe5\x2df6a4\x2db8b4\x2d7943\x2d64e0ef3a0b08.mount: Deactivated successfully. 
Nov 1 00:23:25.297011 containerd[1713]: time="2025-11-01T00:23:25.296981542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-trnvf,Uid:763cf2c8-d06c-456e-8d46-4720620695a1,Namespace:calico-system,Attempt:1,}" Nov 1 00:23:25.344595 containerd[1713]: time="2025-11-01T00:23:25.344274609Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:25.351794 containerd[1713]: time="2025-11-01T00:23:25.351466395Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:23:25.351794 containerd[1713]: time="2025-11-01T00:23:25.351740399Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:23:25.352761 kubelet[3194]: E1101 00:23:25.352148 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:25.352761 kubelet[3194]: E1101 00:23:25.352203 3194 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:25.352761 kubelet[3194]: E1101 00:23:25.352382 3194 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod 
calico-apiserver-6f9c5c4598-f295m_calico-apiserver(d8da81c8-f689-4aff-8f06-3115f31a2434): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:25.352761 kubelet[3194]: E1101 00:23:25.352429 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f9c5c4598-f295m" podUID="d8da81c8-f689-4aff-8f06-3115f31a2434" Nov 1 00:23:25.354974 containerd[1713]: time="2025-11-01T00:23:25.354838736Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:23:25.397943 kubelet[3194]: E1101 00:23:25.397836 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f9c5c4598-f295m" podUID="d8da81c8-f689-4aff-8f06-3115f31a2434" Nov 1 00:23:25.398680 kubelet[3194]: E1101 00:23:25.398448 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d9dc766d8-sj8dp" podUID="fcbbf525-3d8d-4b5d-819a-2cf75639fa8a" Nov 1 00:23:25.443724 kubelet[3194]: I1101 00:23:25.443207 3194 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-k5c5g" podStartSLOduration=44.443186896 podStartE2EDuration="44.443186896s" podCreationTimestamp="2025-11-01 00:22:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:23:25.396190132 +0000 UTC m=+49.830310043" watchObservedRunningTime="2025-11-01 00:23:25.443186896 +0000 UTC m=+49.877306807" Nov 1 00:23:25.581831 systemd-networkd[1344]: cali899eedf65d4: Link UP Nov 1 00:23:25.582057 systemd-networkd[1344]: cali899eedf65d4: Gained carrier Nov 1 00:23:25.600165 containerd[1713]: 2025-11-01 00:23:25.456 [INFO][5211] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--534d15dd10-k8s-calico--apiserver--6f9c5c4598--4kvfs-eth0 calico-apiserver-6f9c5c4598- calico-apiserver 1e7f5e79-08c7-4630-a4c4-82d9824187a0 961 0 2025-11-01 00:22:53 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f9c5c4598 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-534d15dd10 calico-apiserver-6f9c5c4598-4kvfs eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali899eedf65d4 [] [] }} ContainerID="ff041b72664dab105c71891bd3cf3b65e7880c7cbb3881d04873870ea1846110" 
Namespace="calico-apiserver" Pod="calico-apiserver-6f9c5c4598-4kvfs" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-calico--apiserver--6f9c5c4598--4kvfs-" Nov 1 00:23:25.600165 containerd[1713]: 2025-11-01 00:23:25.457 [INFO][5211] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ff041b72664dab105c71891bd3cf3b65e7880c7cbb3881d04873870ea1846110" Namespace="calico-apiserver" Pod="calico-apiserver-6f9c5c4598-4kvfs" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-calico--apiserver--6f9c5c4598--4kvfs-eth0" Nov 1 00:23:25.600165 containerd[1713]: 2025-11-01 00:23:25.516 [INFO][5244] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ff041b72664dab105c71891bd3cf3b65e7880c7cbb3881d04873870ea1846110" HandleID="k8s-pod-network.ff041b72664dab105c71891bd3cf3b65e7880c7cbb3881d04873870ea1846110" Workload="ci--4081.3.6--n--534d15dd10-k8s-calico--apiserver--6f9c5c4598--4kvfs-eth0" Nov 1 00:23:25.600165 containerd[1713]: 2025-11-01 00:23:25.516 [INFO][5244] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ff041b72664dab105c71891bd3cf3b65e7880c7cbb3881d04873870ea1846110" HandleID="k8s-pod-network.ff041b72664dab105c71891bd3cf3b65e7880c7cbb3881d04873870ea1846110" Workload="ci--4081.3.6--n--534d15dd10-k8s-calico--apiserver--6f9c5c4598--4kvfs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5a30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-534d15dd10", "pod":"calico-apiserver-6f9c5c4598-4kvfs", "timestamp":"2025-11-01 00:23:25.51604747 +0000 UTC"}, Hostname:"ci-4081.3.6-n-534d15dd10", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:23:25.600165 containerd[1713]: 2025-11-01 00:23:25.517 [INFO][5244] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Nov 1 00:23:25.600165 containerd[1713]: 2025-11-01 00:23:25.517 [INFO][5244] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:25.600165 containerd[1713]: 2025-11-01 00:23:25.517 [INFO][5244] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-534d15dd10' Nov 1 00:23:25.600165 containerd[1713]: 2025-11-01 00:23:25.541 [INFO][5244] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ff041b72664dab105c71891bd3cf3b65e7880c7cbb3881d04873870ea1846110" host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:25.600165 containerd[1713]: 2025-11-01 00:23:25.545 [INFO][5244] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:25.600165 containerd[1713]: 2025-11-01 00:23:25.549 [INFO][5244] ipam/ipam.go 511: Trying affinity for 192.168.34.128/26 host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:25.600165 containerd[1713]: 2025-11-01 00:23:25.551 [INFO][5244] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.128/26 host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:25.600165 containerd[1713]: 2025-11-01 00:23:25.552 [INFO][5244] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.128/26 host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:25.600165 containerd[1713]: 2025-11-01 00:23:25.553 [INFO][5244] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.34.128/26 handle="k8s-pod-network.ff041b72664dab105c71891bd3cf3b65e7880c7cbb3881d04873870ea1846110" host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:25.600165 containerd[1713]: 2025-11-01 00:23:25.554 [INFO][5244] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ff041b72664dab105c71891bd3cf3b65e7880c7cbb3881d04873870ea1846110 Nov 1 00:23:25.600165 containerd[1713]: 2025-11-01 00:23:25.563 [INFO][5244] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.34.128/26 handle="k8s-pod-network.ff041b72664dab105c71891bd3cf3b65e7880c7cbb3881d04873870ea1846110" 
host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:25.600165 containerd[1713]: 2025-11-01 00:23:25.572 [INFO][5244] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.34.134/26] block=192.168.34.128/26 handle="k8s-pod-network.ff041b72664dab105c71891bd3cf3b65e7880c7cbb3881d04873870ea1846110" host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:25.600165 containerd[1713]: 2025-11-01 00:23:25.572 [INFO][5244] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.134/26] handle="k8s-pod-network.ff041b72664dab105c71891bd3cf3b65e7880c7cbb3881d04873870ea1846110" host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:25.600165 containerd[1713]: 2025-11-01 00:23:25.572 [INFO][5244] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:25.600165 containerd[1713]: 2025-11-01 00:23:25.572 [INFO][5244] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.34.134/26] IPv6=[] ContainerID="ff041b72664dab105c71891bd3cf3b65e7880c7cbb3881d04873870ea1846110" HandleID="k8s-pod-network.ff041b72664dab105c71891bd3cf3b65e7880c7cbb3881d04873870ea1846110" Workload="ci--4081.3.6--n--534d15dd10-k8s-calico--apiserver--6f9c5c4598--4kvfs-eth0" Nov 1 00:23:25.602516 containerd[1713]: 2025-11-01 00:23:25.574 [INFO][5211] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ff041b72664dab105c71891bd3cf3b65e7880c7cbb3881d04873870ea1846110" Namespace="calico-apiserver" Pod="calico-apiserver-6f9c5c4598-4kvfs" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-calico--apiserver--6f9c5c4598--4kvfs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--534d15dd10-k8s-calico--apiserver--6f9c5c4598--4kvfs-eth0", GenerateName:"calico-apiserver-6f9c5c4598-", Namespace:"calico-apiserver", SelfLink:"", UID:"1e7f5e79-08c7-4630-a4c4-82d9824187a0", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 53, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f9c5c4598", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-534d15dd10", ContainerID:"", Pod:"calico-apiserver-6f9c5c4598-4kvfs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali899eedf65d4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:25.602516 containerd[1713]: 2025-11-01 00:23:25.575 [INFO][5211] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.134/32] ContainerID="ff041b72664dab105c71891bd3cf3b65e7880c7cbb3881d04873870ea1846110" Namespace="calico-apiserver" Pod="calico-apiserver-6f9c5c4598-4kvfs" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-calico--apiserver--6f9c5c4598--4kvfs-eth0" Nov 1 00:23:25.602516 containerd[1713]: 2025-11-01 00:23:25.575 [INFO][5211] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali899eedf65d4 ContainerID="ff041b72664dab105c71891bd3cf3b65e7880c7cbb3881d04873870ea1846110" Namespace="calico-apiserver" Pod="calico-apiserver-6f9c5c4598-4kvfs" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-calico--apiserver--6f9c5c4598--4kvfs-eth0" Nov 1 00:23:25.602516 containerd[1713]: 2025-11-01 00:23:25.579 [INFO][5211] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="ff041b72664dab105c71891bd3cf3b65e7880c7cbb3881d04873870ea1846110" Namespace="calico-apiserver" Pod="calico-apiserver-6f9c5c4598-4kvfs" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-calico--apiserver--6f9c5c4598--4kvfs-eth0" Nov 1 00:23:25.602516 containerd[1713]: 2025-11-01 00:23:25.579 [INFO][5211] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ff041b72664dab105c71891bd3cf3b65e7880c7cbb3881d04873870ea1846110" Namespace="calico-apiserver" Pod="calico-apiserver-6f9c5c4598-4kvfs" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-calico--apiserver--6f9c5c4598--4kvfs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--534d15dd10-k8s-calico--apiserver--6f9c5c4598--4kvfs-eth0", GenerateName:"calico-apiserver-6f9c5c4598-", Namespace:"calico-apiserver", SelfLink:"", UID:"1e7f5e79-08c7-4630-a4c4-82d9824187a0", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f9c5c4598", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-534d15dd10", ContainerID:"ff041b72664dab105c71891bd3cf3b65e7880c7cbb3881d04873870ea1846110", Pod:"calico-apiserver-6f9c5c4598-4kvfs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali899eedf65d4", MAC:"ce:5a:92:b3:79:b3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:25.602516 containerd[1713]: 2025-11-01 00:23:25.597 [INFO][5211] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ff041b72664dab105c71891bd3cf3b65e7880c7cbb3881d04873870ea1846110" Namespace="calico-apiserver" Pod="calico-apiserver-6f9c5c4598-4kvfs" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-calico--apiserver--6f9c5c4598--4kvfs-eth0" Nov 1 00:23:25.619850 containerd[1713]: time="2025-11-01T00:23:25.619812015Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:25.623069 containerd[1713]: time="2025-11-01T00:23:25.623021254Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:23:25.623319 containerd[1713]: time="2025-11-01T00:23:25.623188956Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:23:25.623676 kubelet[3194]: E1101 00:23:25.623632 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:23:25.623828 kubelet[3194]: E1101 00:23:25.623806 3194 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:23:25.624444 kubelet[3194]: E1101 00:23:25.624069 3194 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-f7h6c_calico-system(389b7b2a-9963-4ce4-a0c8-a7f3fe88a917): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:25.624444 kubelet[3194]: E1101 00:23:25.624121 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-f7h6c" podUID="389b7b2a-9963-4ce4-a0c8-a7f3fe88a917" Nov 1 00:23:25.632570 containerd[1713]: time="2025-11-01T00:23:25.631639957Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:23:25.632570 containerd[1713]: time="2025-11-01T00:23:25.631697058Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:23:25.632570 containerd[1713]: time="2025-11-01T00:23:25.631740358Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:25.632570 containerd[1713]: time="2025-11-01T00:23:25.631849559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:25.665702 systemd[1]: Started cri-containerd-ff041b72664dab105c71891bd3cf3b65e7880c7cbb3881d04873870ea1846110.scope - libcontainer container ff041b72664dab105c71891bd3cf3b65e7880c7cbb3881d04873870ea1846110. Nov 1 00:23:25.689810 systemd-networkd[1344]: cali555939f863c: Link UP Nov 1 00:23:25.691501 systemd-networkd[1344]: cali555939f863c: Gained carrier Nov 1 00:23:25.718442 containerd[1713]: 2025-11-01 00:23:25.448 [INFO][5220] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--534d15dd10-k8s-csi--node--driver--trnvf-eth0 csi-node-driver- calico-system 763cf2c8-d06c-456e-8d46-4720620695a1 962 0 2025-11-01 00:22:58 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-n-534d15dd10 csi-node-driver-trnvf eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali555939f863c [] [] }} ContainerID="f59cf44931d7d948c504c26ef1e6fd10db7362e8ecbaa4db331126db7f6b1739" Namespace="calico-system" Pod="csi-node-driver-trnvf" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-csi--node--driver--trnvf-" Nov 1 00:23:25.718442 containerd[1713]: 2025-11-01 00:23:25.448 [INFO][5220] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f59cf44931d7d948c504c26ef1e6fd10db7362e8ecbaa4db331126db7f6b1739" Namespace="calico-system" Pod="csi-node-driver-trnvf" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-csi--node--driver--trnvf-eth0" Nov 1 
00:23:25.718442 containerd[1713]: 2025-11-01 00:23:25.531 [INFO][5239] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f59cf44931d7d948c504c26ef1e6fd10db7362e8ecbaa4db331126db7f6b1739" HandleID="k8s-pod-network.f59cf44931d7d948c504c26ef1e6fd10db7362e8ecbaa4db331126db7f6b1739" Workload="ci--4081.3.6--n--534d15dd10-k8s-csi--node--driver--trnvf-eth0" Nov 1 00:23:25.718442 containerd[1713]: 2025-11-01 00:23:25.531 [INFO][5239] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f59cf44931d7d948c504c26ef1e6fd10db7362e8ecbaa4db331126db7f6b1739" HandleID="k8s-pod-network.f59cf44931d7d948c504c26ef1e6fd10db7362e8ecbaa4db331126db7f6b1739" Workload="ci--4081.3.6--n--534d15dd10-k8s-csi--node--driver--trnvf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f690), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-534d15dd10", "pod":"csi-node-driver-trnvf", "timestamp":"2025-11-01 00:23:25.531598057 +0000 UTC"}, Hostname:"ci-4081.3.6-n-534d15dd10", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:23:25.718442 containerd[1713]: 2025-11-01 00:23:25.532 [INFO][5239] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:25.718442 containerd[1713]: 2025-11-01 00:23:25.572 [INFO][5239] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:23:25.718442 containerd[1713]: 2025-11-01 00:23:25.572 [INFO][5239] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-534d15dd10' Nov 1 00:23:25.718442 containerd[1713]: 2025-11-01 00:23:25.641 [INFO][5239] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f59cf44931d7d948c504c26ef1e6fd10db7362e8ecbaa4db331126db7f6b1739" host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:25.718442 containerd[1713]: 2025-11-01 00:23:25.648 [INFO][5239] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:25.718442 containerd[1713]: 2025-11-01 00:23:25.654 [INFO][5239] ipam/ipam.go 511: Trying affinity for 192.168.34.128/26 host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:25.718442 containerd[1713]: 2025-11-01 00:23:25.656 [INFO][5239] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.128/26 host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:25.718442 containerd[1713]: 2025-11-01 00:23:25.659 [INFO][5239] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.128/26 host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:25.718442 containerd[1713]: 2025-11-01 00:23:25.659 [INFO][5239] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.34.128/26 handle="k8s-pod-network.f59cf44931d7d948c504c26ef1e6fd10db7362e8ecbaa4db331126db7f6b1739" host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:25.718442 containerd[1713]: 2025-11-01 00:23:25.662 [INFO][5239] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f59cf44931d7d948c504c26ef1e6fd10db7362e8ecbaa4db331126db7f6b1739 Nov 1 00:23:25.718442 containerd[1713]: 2025-11-01 00:23:25.667 [INFO][5239] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.34.128/26 handle="k8s-pod-network.f59cf44931d7d948c504c26ef1e6fd10db7362e8ecbaa4db331126db7f6b1739" host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:25.718442 containerd[1713]: 2025-11-01 00:23:25.683 [INFO][5239] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.34.135/26] block=192.168.34.128/26 handle="k8s-pod-network.f59cf44931d7d948c504c26ef1e6fd10db7362e8ecbaa4db331126db7f6b1739" host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:25.718442 containerd[1713]: 2025-11-01 00:23:25.683 [INFO][5239] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.135/26] handle="k8s-pod-network.f59cf44931d7d948c504c26ef1e6fd10db7362e8ecbaa4db331126db7f6b1739" host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:25.718442 containerd[1713]: 2025-11-01 00:23:25.683 [INFO][5239] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:25.718442 containerd[1713]: 2025-11-01 00:23:25.683 [INFO][5239] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.34.135/26] IPv6=[] ContainerID="f59cf44931d7d948c504c26ef1e6fd10db7362e8ecbaa4db331126db7f6b1739" HandleID="k8s-pod-network.f59cf44931d7d948c504c26ef1e6fd10db7362e8ecbaa4db331126db7f6b1739" Workload="ci--4081.3.6--n--534d15dd10-k8s-csi--node--driver--trnvf-eth0" Nov 1 00:23:25.719383 containerd[1713]: 2025-11-01 00:23:25.687 [INFO][5220] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f59cf44931d7d948c504c26ef1e6fd10db7362e8ecbaa4db331126db7f6b1739" Namespace="calico-system" Pod="csi-node-driver-trnvf" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-csi--node--driver--trnvf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--534d15dd10-k8s-csi--node--driver--trnvf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"763cf2c8-d06c-456e-8d46-4720620695a1", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", 
"pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-534d15dd10", ContainerID:"", Pod:"csi-node-driver-trnvf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.34.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali555939f863c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:25.719383 containerd[1713]: 2025-11-01 00:23:25.687 [INFO][5220] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.135/32] ContainerID="f59cf44931d7d948c504c26ef1e6fd10db7362e8ecbaa4db331126db7f6b1739" Namespace="calico-system" Pod="csi-node-driver-trnvf" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-csi--node--driver--trnvf-eth0" Nov 1 00:23:25.719383 containerd[1713]: 2025-11-01 00:23:25.687 [INFO][5220] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali555939f863c ContainerID="f59cf44931d7d948c504c26ef1e6fd10db7362e8ecbaa4db331126db7f6b1739" Namespace="calico-system" Pod="csi-node-driver-trnvf" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-csi--node--driver--trnvf-eth0" Nov 1 00:23:25.719383 containerd[1713]: 2025-11-01 00:23:25.690 [INFO][5220] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f59cf44931d7d948c504c26ef1e6fd10db7362e8ecbaa4db331126db7f6b1739" Namespace="calico-system" Pod="csi-node-driver-trnvf" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-csi--node--driver--trnvf-eth0" Nov 1 00:23:25.719383 containerd[1713]: 2025-11-01 00:23:25.690 
[INFO][5220] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f59cf44931d7d948c504c26ef1e6fd10db7362e8ecbaa4db331126db7f6b1739" Namespace="calico-system" Pod="csi-node-driver-trnvf" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-csi--node--driver--trnvf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--534d15dd10-k8s-csi--node--driver--trnvf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"763cf2c8-d06c-456e-8d46-4720620695a1", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-534d15dd10", ContainerID:"f59cf44931d7d948c504c26ef1e6fd10db7362e8ecbaa4db331126db7f6b1739", Pod:"csi-node-driver-trnvf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.34.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali555939f863c", MAC:"02:d9:9e:be:04:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:25.719383 containerd[1713]: 2025-11-01 00:23:25.713 [INFO][5220] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f59cf44931d7d948c504c26ef1e6fd10db7362e8ecbaa4db331126db7f6b1739" Namespace="calico-system" Pod="csi-node-driver-trnvf" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-csi--node--driver--trnvf-eth0" Nov 1 00:23:25.753185 containerd[1713]: time="2025-11-01T00:23:25.752880312Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:23:25.753834 containerd[1713]: time="2025-11-01T00:23:25.753654721Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:23:25.753834 containerd[1713]: time="2025-11-01T00:23:25.753674121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:25.753834 containerd[1713]: time="2025-11-01T00:23:25.753757122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:25.765086 containerd[1713]: time="2025-11-01T00:23:25.764688253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f9c5c4598-4kvfs,Uid:1e7f5e79-08c7-4630-a4c4-82d9824187a0,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"ff041b72664dab105c71891bd3cf3b65e7880c7cbb3881d04873870ea1846110\"" Nov 1 00:23:25.767665 containerd[1713]: time="2025-11-01T00:23:25.767604288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:23:25.780734 systemd[1]: Started cri-containerd-f59cf44931d7d948c504c26ef1e6fd10db7362e8ecbaa4db331126db7f6b1739.scope - libcontainer container f59cf44931d7d948c504c26ef1e6fd10db7362e8ecbaa4db331126db7f6b1739. 
Nov 1 00:23:25.807895 containerd[1713]: time="2025-11-01T00:23:25.807852571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-trnvf,Uid:763cf2c8-d06c-456e-8d46-4720620695a1,Namespace:calico-system,Attempt:1,} returns sandbox id \"f59cf44931d7d948c504c26ef1e6fd10db7362e8ecbaa4db331126db7f6b1739\"" Nov 1 00:23:25.964701 systemd-networkd[1344]: calic704c3e0827: Gained IPv6LL Nov 1 00:23:26.017443 containerd[1713]: time="2025-11-01T00:23:26.017389985Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:26.023546 containerd[1713]: time="2025-11-01T00:23:26.023441958Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:23:26.023790 containerd[1713]: time="2025-11-01T00:23:26.023576159Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:23:26.023842 kubelet[3194]: E1101 00:23:26.023793 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:26.024027 kubelet[3194]: E1101 00:23:26.023853 3194 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:26.024458 
kubelet[3194]: E1101 00:23:26.024065 3194 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6f9c5c4598-4kvfs_calico-apiserver(1e7f5e79-08c7-4630-a4c4-82d9824187a0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:26.024458 kubelet[3194]: E1101 00:23:26.024118 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f9c5c4598-4kvfs" podUID="1e7f5e79-08c7-4630-a4c4-82d9824187a0" Nov 1 00:23:26.024723 containerd[1713]: time="2025-11-01T00:23:26.024635872Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:23:26.276396 containerd[1713]: time="2025-11-01T00:23:26.276352792Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:26.282580 containerd[1713]: time="2025-11-01T00:23:26.282521866Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:23:26.282701 containerd[1713]: time="2025-11-01T00:23:26.282549567Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 00:23:26.282867 kubelet[3194]: E1101 00:23:26.282824 3194 log.go:32] 
"PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:23:26.282953 kubelet[3194]: E1101 00:23:26.282877 3194 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:23:26.283001 kubelet[3194]: E1101 00:23:26.282979 3194 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-trnvf_calico-system(763cf2c8-d06c-456e-8d46-4720620695a1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:26.284709 containerd[1713]: time="2025-11-01T00:23:26.284670992Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:23:26.399260 kubelet[3194]: E1101 00:23:26.398995 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f9c5c4598-4kvfs" podUID="1e7f5e79-08c7-4630-a4c4-82d9824187a0" Nov 1 
00:23:26.399260 kubelet[3194]: E1101 00:23:26.399049 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-f7h6c" podUID="389b7b2a-9963-4ce4-a0c8-a7f3fe88a917" Nov 1 00:23:26.399260 kubelet[3194]: E1101 00:23:26.399142 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f9c5c4598-f295m" podUID="d8da81c8-f689-4aff-8f06-3115f31a2434" Nov 1 00:23:26.527981 containerd[1713]: time="2025-11-01T00:23:26.527429605Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:26.532274 containerd[1713]: time="2025-11-01T00:23:26.531922359Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:23:26.532274 containerd[1713]: time="2025-11-01T00:23:26.532047160Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 00:23:26.532447 kubelet[3194]: E1101 00:23:26.532191 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:23:26.532447 kubelet[3194]: E1101 00:23:26.532237 3194 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:23:26.533047 kubelet[3194]: E1101 00:23:26.532739 3194 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-trnvf_calico-system(763cf2c8-d06c-456e-8d46-4720620695a1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:26.533047 kubelet[3194]: E1101 00:23:26.532998 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to 
\"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-trnvf" podUID="763cf2c8-d06c-456e-8d46-4720620695a1" Nov 1 00:23:26.732823 systemd-networkd[1344]: calif59f37f9aa6: Gained IPv6LL Nov 1 00:23:26.733287 systemd-networkd[1344]: cali823d1d4a942: Gained IPv6LL Nov 1 00:23:27.052807 systemd-networkd[1344]: cali555939f863c: Gained IPv6LL Nov 1 00:23:27.089449 containerd[1713]: time="2025-11-01T00:23:27.089401948Z" level=info msg="StopPodSandbox for \"d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b\"" Nov 1 00:23:27.210613 containerd[1713]: 2025-11-01 00:23:27.160 [INFO][5373] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b" Nov 1 00:23:27.210613 containerd[1713]: 2025-11-01 00:23:27.160 [INFO][5373] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b" iface="eth0" netns="/var/run/netns/cni-fa6414ed-a8e0-ce0c-43c5-59d3f6d7330a" Nov 1 00:23:27.210613 containerd[1713]: 2025-11-01 00:23:27.161 [INFO][5373] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b" iface="eth0" netns="/var/run/netns/cni-fa6414ed-a8e0-ce0c-43c5-59d3f6d7330a" Nov 1 00:23:27.210613 containerd[1713]: 2025-11-01 00:23:27.161 [INFO][5373] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b" iface="eth0" netns="/var/run/netns/cni-fa6414ed-a8e0-ce0c-43c5-59d3f6d7330a" Nov 1 00:23:27.210613 containerd[1713]: 2025-11-01 00:23:27.161 [INFO][5373] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b" Nov 1 00:23:27.210613 containerd[1713]: 2025-11-01 00:23:27.161 [INFO][5373] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b" Nov 1 00:23:27.210613 containerd[1713]: 2025-11-01 00:23:27.195 [INFO][5380] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b" HandleID="k8s-pod-network.d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b" Workload="ci--4081.3.6--n--534d15dd10-k8s-coredns--66bc5c9577--9d267-eth0" Nov 1 00:23:27.210613 containerd[1713]: 2025-11-01 00:23:27.195 [INFO][5380] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:27.210613 containerd[1713]: 2025-11-01 00:23:27.195 [INFO][5380] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:27.210613 containerd[1713]: 2025-11-01 00:23:27.206 [WARNING][5380] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b" HandleID="k8s-pod-network.d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b" Workload="ci--4081.3.6--n--534d15dd10-k8s-coredns--66bc5c9577--9d267-eth0" Nov 1 00:23:27.210613 containerd[1713]: 2025-11-01 00:23:27.206 [INFO][5380] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b" HandleID="k8s-pod-network.d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b" Workload="ci--4081.3.6--n--534d15dd10-k8s-coredns--66bc5c9577--9d267-eth0" Nov 1 00:23:27.210613 containerd[1713]: 2025-11-01 00:23:27.207 [INFO][5380] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:27.210613 containerd[1713]: 2025-11-01 00:23:27.208 [INFO][5373] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b" Nov 1 00:23:27.212691 containerd[1713]: time="2025-11-01T00:23:27.212630926Z" level=info msg="TearDown network for sandbox \"d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b\" successfully" Nov 1 00:23:27.212691 containerd[1713]: time="2025-11-01T00:23:27.212676427Z" level=info msg="StopPodSandbox for \"d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b\" returns successfully" Nov 1 00:23:27.216242 systemd[1]: run-netns-cni\x2dfa6414ed\x2da8e0\x2dce0c\x2d43c5\x2d59d3f6d7330a.mount: Deactivated successfully. 
Nov 1 00:23:27.225027 containerd[1713]: time="2025-11-01T00:23:27.224691771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9d267,Uid:79bd8e75-9a83-49b2-ac1b-70aed374e2d6,Namespace:kube-system,Attempt:1,}" Nov 1 00:23:27.373123 systemd-networkd[1344]: cali899eedf65d4: Gained IPv6LL Nov 1 00:23:27.376285 systemd-networkd[1344]: cali352cf35b7cd: Link UP Nov 1 00:23:27.379932 systemd-networkd[1344]: cali352cf35b7cd: Gained carrier Nov 1 00:23:27.407779 kubelet[3194]: E1101 00:23:27.407728 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-trnvf" podUID="763cf2c8-d06c-456e-8d46-4720620695a1" Nov 1 00:23:27.409189 kubelet[3194]: E1101 00:23:27.408354 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f9c5c4598-4kvfs" podUID="1e7f5e79-08c7-4630-a4c4-82d9824187a0" Nov 1 00:23:27.412493 containerd[1713]: 2025-11-01 00:23:27.289 [INFO][5387] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--534d15dd10-k8s-coredns--66bc5c9577--9d267-eth0 coredns-66bc5c9577- kube-system 79bd8e75-9a83-49b2-ac1b-70aed374e2d6 1016 0 2025-11-01 00:22:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-534d15dd10 coredns-66bc5c9577-9d267 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali352cf35b7cd [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="f73351e126cfcca2d451ce9f6c040858cdf58f740a50ff1aee26258946809353" Namespace="kube-system" Pod="coredns-66bc5c9577-9d267" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-coredns--66bc5c9577--9d267-" Nov 1 00:23:27.412493 containerd[1713]: 2025-11-01 00:23:27.291 [INFO][5387] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f73351e126cfcca2d451ce9f6c040858cdf58f740a50ff1aee26258946809353" Namespace="kube-system" Pod="coredns-66bc5c9577-9d267" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-coredns--66bc5c9577--9d267-eth0" Nov 1 00:23:27.412493 containerd[1713]: 2025-11-01 00:23:27.315 [INFO][5399] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f73351e126cfcca2d451ce9f6c040858cdf58f740a50ff1aee26258946809353" HandleID="k8s-pod-network.f73351e126cfcca2d451ce9f6c040858cdf58f740a50ff1aee26258946809353" Workload="ci--4081.3.6--n--534d15dd10-k8s-coredns--66bc5c9577--9d267-eth0" Nov 1 00:23:27.412493 containerd[1713]: 
2025-11-01 00:23:27.315 [INFO][5399] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f73351e126cfcca2d451ce9f6c040858cdf58f740a50ff1aee26258946809353" HandleID="k8s-pod-network.f73351e126cfcca2d451ce9f6c040858cdf58f740a50ff1aee26258946809353" Workload="ci--4081.3.6--n--534d15dd10-k8s-coredns--66bc5c9577--9d267-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f590), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-534d15dd10", "pod":"coredns-66bc5c9577-9d267", "timestamp":"2025-11-01 00:23:27.315787264 +0000 UTC"}, Hostname:"ci-4081.3.6-n-534d15dd10", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:23:27.412493 containerd[1713]: 2025-11-01 00:23:27.316 [INFO][5399] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:27.412493 containerd[1713]: 2025-11-01 00:23:27.316 [INFO][5399] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:23:27.412493 containerd[1713]: 2025-11-01 00:23:27.316 [INFO][5399] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-534d15dd10' Nov 1 00:23:27.412493 containerd[1713]: 2025-11-01 00:23:27.322 [INFO][5399] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f73351e126cfcca2d451ce9f6c040858cdf58f740a50ff1aee26258946809353" host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:27.412493 containerd[1713]: 2025-11-01 00:23:27.326 [INFO][5399] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:27.412493 containerd[1713]: 2025-11-01 00:23:27.331 [INFO][5399] ipam/ipam.go 511: Trying affinity for 192.168.34.128/26 host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:27.412493 containerd[1713]: 2025-11-01 00:23:27.334 [INFO][5399] ipam/ipam.go 158: Attempting to load block cidr=192.168.34.128/26 host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:27.412493 containerd[1713]: 2025-11-01 00:23:27.338 [INFO][5399] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.34.128/26 host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:27.412493 containerd[1713]: 2025-11-01 00:23:27.338 [INFO][5399] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.34.128/26 handle="k8s-pod-network.f73351e126cfcca2d451ce9f6c040858cdf58f740a50ff1aee26258946809353" host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:27.412493 containerd[1713]: 2025-11-01 00:23:27.340 [INFO][5399] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f73351e126cfcca2d451ce9f6c040858cdf58f740a50ff1aee26258946809353 Nov 1 00:23:27.412493 containerd[1713]: 2025-11-01 00:23:27.347 [INFO][5399] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.34.128/26 handle="k8s-pod-network.f73351e126cfcca2d451ce9f6c040858cdf58f740a50ff1aee26258946809353" host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:27.412493 containerd[1713]: 2025-11-01 00:23:27.367 [INFO][5399] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.34.136/26] block=192.168.34.128/26 handle="k8s-pod-network.f73351e126cfcca2d451ce9f6c040858cdf58f740a50ff1aee26258946809353" host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:27.412493 containerd[1713]: 2025-11-01 00:23:27.367 [INFO][5399] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.34.136/26] handle="k8s-pod-network.f73351e126cfcca2d451ce9f6c040858cdf58f740a50ff1aee26258946809353" host="ci-4081.3.6-n-534d15dd10" Nov 1 00:23:27.412493 containerd[1713]: 2025-11-01 00:23:27.368 [INFO][5399] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:27.412493 containerd[1713]: 2025-11-01 00:23:27.368 [INFO][5399] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.34.136/26] IPv6=[] ContainerID="f73351e126cfcca2d451ce9f6c040858cdf58f740a50ff1aee26258946809353" HandleID="k8s-pod-network.f73351e126cfcca2d451ce9f6c040858cdf58f740a50ff1aee26258946809353" Workload="ci--4081.3.6--n--534d15dd10-k8s-coredns--66bc5c9577--9d267-eth0" Nov 1 00:23:27.414026 containerd[1713]: 2025-11-01 00:23:27.370 [INFO][5387] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f73351e126cfcca2d451ce9f6c040858cdf58f740a50ff1aee26258946809353" Namespace="kube-system" Pod="coredns-66bc5c9577-9d267" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-coredns--66bc5c9577--9d267-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--534d15dd10-k8s-coredns--66bc5c9577--9d267-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"79bd8e75-9a83-49b2-ac1b-70aed374e2d6", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-534d15dd10", ContainerID:"", Pod:"coredns-66bc5c9577-9d267", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali352cf35b7cd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:27.414026 containerd[1713]: 2025-11-01 00:23:27.370 [INFO][5387] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.34.136/32] ContainerID="f73351e126cfcca2d451ce9f6c040858cdf58f740a50ff1aee26258946809353" Namespace="kube-system" Pod="coredns-66bc5c9577-9d267" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-coredns--66bc5c9577--9d267-eth0" Nov 1 00:23:27.414026 containerd[1713]: 2025-11-01 00:23:27.370 [INFO][5387] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali352cf35b7cd 
ContainerID="f73351e126cfcca2d451ce9f6c040858cdf58f740a50ff1aee26258946809353" Namespace="kube-system" Pod="coredns-66bc5c9577-9d267" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-coredns--66bc5c9577--9d267-eth0" Nov 1 00:23:27.414026 containerd[1713]: 2025-11-01 00:23:27.379 [INFO][5387] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f73351e126cfcca2d451ce9f6c040858cdf58f740a50ff1aee26258946809353" Namespace="kube-system" Pod="coredns-66bc5c9577-9d267" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-coredns--66bc5c9577--9d267-eth0" Nov 1 00:23:27.414026 containerd[1713]: 2025-11-01 00:23:27.379 [INFO][5387] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f73351e126cfcca2d451ce9f6c040858cdf58f740a50ff1aee26258946809353" Namespace="kube-system" Pod="coredns-66bc5c9577-9d267" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-coredns--66bc5c9577--9d267-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--534d15dd10-k8s-coredns--66bc5c9577--9d267-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"79bd8e75-9a83-49b2-ac1b-70aed374e2d6", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-534d15dd10", ContainerID:"f73351e126cfcca2d451ce9f6c040858cdf58f740a50ff1aee26258946809353", 
Pod:"coredns-66bc5c9577-9d267", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali352cf35b7cd", MAC:"9a:ed:1a:8f:7b:6e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:27.414406 containerd[1713]: 2025-11-01 00:23:27.406 [INFO][5387] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f73351e126cfcca2d451ce9f6c040858cdf58f740a50ff1aee26258946809353" Namespace="kube-system" Pod="coredns-66bc5c9577-9d267" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-coredns--66bc5c9577--9d267-eth0" Nov 1 00:23:27.454751 containerd[1713]: time="2025-11-01T00:23:27.452458804Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 1 00:23:27.454751 containerd[1713]: time="2025-11-01T00:23:27.452528805Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 1 00:23:27.454751 containerd[1713]: time="2025-11-01T00:23:27.452566205Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:27.454751 containerd[1713]: time="2025-11-01T00:23:27.452666406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 1 00:23:27.506719 systemd[1]: Started cri-containerd-f73351e126cfcca2d451ce9f6c040858cdf58f740a50ff1aee26258946809353.scope - libcontainer container f73351e126cfcca2d451ce9f6c040858cdf58f740a50ff1aee26258946809353. Nov 1 00:23:27.600464 containerd[1713]: time="2025-11-01T00:23:27.600417379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9d267,Uid:79bd8e75-9a83-49b2-ac1b-70aed374e2d6,Namespace:kube-system,Attempt:1,} returns sandbox id \"f73351e126cfcca2d451ce9f6c040858cdf58f740a50ff1aee26258946809353\"" Nov 1 00:23:27.608602 containerd[1713]: time="2025-11-01T00:23:27.608453975Z" level=info msg="CreateContainer within sandbox \"f73351e126cfcca2d451ce9f6c040858cdf58f740a50ff1aee26258946809353\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:23:27.637009 containerd[1713]: time="2025-11-01T00:23:27.636817016Z" level=info msg="CreateContainer within sandbox \"f73351e126cfcca2d451ce9f6c040858cdf58f740a50ff1aee26258946809353\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c1f72bc57ff5aebe29a432855a405ba8145a14add9265ec81d590ec969276d7a\"" Nov 1 00:23:27.639350 containerd[1713]: time="2025-11-01T00:23:27.638352534Z" level=info msg="StartContainer for \"c1f72bc57ff5aebe29a432855a405ba8145a14add9265ec81d590ec969276d7a\"" Nov 1 00:23:27.668707 systemd[1]: Started cri-containerd-c1f72bc57ff5aebe29a432855a405ba8145a14add9265ec81d590ec969276d7a.scope - libcontainer container 
c1f72bc57ff5aebe29a432855a405ba8145a14add9265ec81d590ec969276d7a. Nov 1 00:23:27.696614 containerd[1713]: time="2025-11-01T00:23:27.696552532Z" level=info msg="StartContainer for \"c1f72bc57ff5aebe29a432855a405ba8145a14add9265ec81d590ec969276d7a\" returns successfully" Nov 1 00:23:28.442092 kubelet[3194]: I1101 00:23:28.441905 3194 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-9d267" podStartSLOduration=47.441883475 podStartE2EDuration="47.441883475s" podCreationTimestamp="2025-11-01 00:22:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:23:28.422157739 +0000 UTC m=+52.856277650" watchObservedRunningTime="2025-11-01 00:23:28.441883475 +0000 UTC m=+52.876003386" Nov 1 00:23:29.228791 systemd-networkd[1344]: cali352cf35b7cd: Gained IPv6LL Nov 1 00:23:34.093048 containerd[1713]: time="2025-11-01T00:23:34.092043104Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:23:34.345366 containerd[1713]: time="2025-11-01T00:23:34.345103351Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:34.348023 containerd[1713]: time="2025-11-01T00:23:34.347975986Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:23:34.348123 containerd[1713]: time="2025-11-01T00:23:34.348083687Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:23:34.348379 kubelet[3194]: E1101 00:23:34.348339 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:23:34.349313 kubelet[3194]: E1101 00:23:34.348390 3194 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:23:34.349313 kubelet[3194]: E1101 00:23:34.348488 3194 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-5c6f5f86c9-hxs55_calico-system(729ae25b-84a0-42aa-9bbf-32506f51f3c1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:34.349580 containerd[1713]: time="2025-11-01T00:23:34.349551805Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:23:34.586681 containerd[1713]: time="2025-11-01T00:23:34.586622060Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:34.591586 containerd[1713]: time="2025-11-01T00:23:34.591443018Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:23:34.591586 containerd[1713]: time="2025-11-01T00:23:34.591504519Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:23:34.591779 kubelet[3194]: E1101 00:23:34.591725 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:23:34.591884 kubelet[3194]: E1101 00:23:34.591783 3194 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:23:34.591941 kubelet[3194]: E1101 00:23:34.591884 3194 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-5c6f5f86c9-hxs55_calico-system(729ae25b-84a0-42aa-9bbf-32506f51f3c1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:34.591998 kubelet[3194]: E1101 00:23:34.591946 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c6f5f86c9-hxs55" podUID="729ae25b-84a0-42aa-9bbf-32506f51f3c1" Nov 1 00:23:36.076501 containerd[1713]: time="2025-11-01T00:23:36.076076996Z" level=info msg="StopPodSandbox for \"337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9\"" Nov 1 00:23:36.147031 containerd[1713]: 2025-11-01 00:23:36.113 [WARNING][5513] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--534d15dd10-k8s-csi--node--driver--trnvf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"763cf2c8-d06c-456e-8d46-4720620695a1", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-534d15dd10", ContainerID:"f59cf44931d7d948c504c26ef1e6fd10db7362e8ecbaa4db331126db7f6b1739", Pod:"csi-node-driver-trnvf", 
Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.34.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali555939f863c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:36.147031 containerd[1713]: 2025-11-01 00:23:36.114 [INFO][5513] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9" Nov 1 00:23:36.147031 containerd[1713]: 2025-11-01 00:23:36.114 [INFO][5513] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9" iface="eth0" netns="" Nov 1 00:23:36.147031 containerd[1713]: 2025-11-01 00:23:36.114 [INFO][5513] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9" Nov 1 00:23:36.147031 containerd[1713]: 2025-11-01 00:23:36.114 [INFO][5513] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9" Nov 1 00:23:36.147031 containerd[1713]: 2025-11-01 00:23:36.136 [INFO][5523] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9" HandleID="k8s-pod-network.337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9" Workload="ci--4081.3.6--n--534d15dd10-k8s-csi--node--driver--trnvf-eth0" Nov 1 00:23:36.147031 containerd[1713]: 2025-11-01 00:23:36.136 [INFO][5523] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:36.147031 containerd[1713]: 2025-11-01 00:23:36.136 [INFO][5523] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:23:36.147031 containerd[1713]: 2025-11-01 00:23:36.143 [WARNING][5523] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9" HandleID="k8s-pod-network.337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9" Workload="ci--4081.3.6--n--534d15dd10-k8s-csi--node--driver--trnvf-eth0" Nov 1 00:23:36.147031 containerd[1713]: 2025-11-01 00:23:36.143 [INFO][5523] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9" HandleID="k8s-pod-network.337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9" Workload="ci--4081.3.6--n--534d15dd10-k8s-csi--node--driver--trnvf-eth0" Nov 1 00:23:36.147031 containerd[1713]: 2025-11-01 00:23:36.144 [INFO][5523] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:36.147031 containerd[1713]: 2025-11-01 00:23:36.145 [INFO][5513] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9" Nov 1 00:23:36.147866 containerd[1713]: time="2025-11-01T00:23:36.147075951Z" level=info msg="TearDown network for sandbox \"337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9\" successfully" Nov 1 00:23:36.147866 containerd[1713]: time="2025-11-01T00:23:36.147106051Z" level=info msg="StopPodSandbox for \"337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9\" returns successfully" Nov 1 00:23:36.147866 containerd[1713]: time="2025-11-01T00:23:36.147685758Z" level=info msg="RemovePodSandbox for \"337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9\"" Nov 1 00:23:36.147866 containerd[1713]: time="2025-11-01T00:23:36.147721459Z" level=info msg="Forcibly stopping sandbox \"337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9\"" Nov 1 00:23:36.217944 containerd[1713]: 2025-11-01 00:23:36.185 [WARNING][5537] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--534d15dd10-k8s-csi--node--driver--trnvf-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"763cf2c8-d06c-456e-8d46-4720620695a1", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-534d15dd10", ContainerID:"f59cf44931d7d948c504c26ef1e6fd10db7362e8ecbaa4db331126db7f6b1739", Pod:"csi-node-driver-trnvf", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.34.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali555939f863c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:36.217944 containerd[1713]: 2025-11-01 00:23:36.185 [INFO][5537] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9" Nov 1 00:23:36.217944 containerd[1713]: 2025-11-01 00:23:36.185 [INFO][5537] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9" iface="eth0" netns="" Nov 1 00:23:36.217944 containerd[1713]: 2025-11-01 00:23:36.185 [INFO][5537] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9" Nov 1 00:23:36.217944 containerd[1713]: 2025-11-01 00:23:36.185 [INFO][5537] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9" Nov 1 00:23:36.217944 containerd[1713]: 2025-11-01 00:23:36.205 [INFO][5544] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9" HandleID="k8s-pod-network.337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9" Workload="ci--4081.3.6--n--534d15dd10-k8s-csi--node--driver--trnvf-eth0" Nov 1 00:23:36.217944 containerd[1713]: 2025-11-01 00:23:36.205 [INFO][5544] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:36.217944 containerd[1713]: 2025-11-01 00:23:36.205 [INFO][5544] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:36.217944 containerd[1713]: 2025-11-01 00:23:36.212 [WARNING][5544] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9" HandleID="k8s-pod-network.337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9" Workload="ci--4081.3.6--n--534d15dd10-k8s-csi--node--driver--trnvf-eth0" Nov 1 00:23:36.217944 containerd[1713]: 2025-11-01 00:23:36.212 [INFO][5544] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9" HandleID="k8s-pod-network.337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9" Workload="ci--4081.3.6--n--534d15dd10-k8s-csi--node--driver--trnvf-eth0" Nov 1 00:23:36.217944 containerd[1713]: 2025-11-01 00:23:36.214 [INFO][5544] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:36.217944 containerd[1713]: 2025-11-01 00:23:36.215 [INFO][5537] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9" Nov 1 00:23:36.217944 containerd[1713]: time="2025-11-01T00:23:36.216615689Z" level=info msg="TearDown network for sandbox \"337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9\" successfully" Nov 1 00:23:36.229222 containerd[1713]: time="2025-11-01T00:23:36.229032838Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 00:23:36.229222 containerd[1713]: time="2025-11-01T00:23:36.229098939Z" level=info msg="RemovePodSandbox \"337310372dc6220be21c65ca4a60ce03c75ef5d830c17b078fc73bf9d16108b9\" returns successfully" Nov 1 00:23:36.229945 containerd[1713]: time="2025-11-01T00:23:36.229677446Z" level=info msg="StopPodSandbox for \"febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4\"" Nov 1 00:23:36.299920 containerd[1713]: 2025-11-01 00:23:36.265 [WARNING][5558] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--534d15dd10-k8s-coredns--66bc5c9577--k5c5g-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"a83589c5-3f06-47b3-8533-6e8d610b7e5a", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-534d15dd10", ContainerID:"54d69ab2a881c500828eaf3ee60d239056a7b3b17b462eeef4b857ee6d93a882", Pod:"coredns-66bc5c9577-k5c5g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic704c3e0827", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:36.299920 containerd[1713]: 2025-11-01 00:23:36.265 [INFO][5558] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4" Nov 1 00:23:36.299920 containerd[1713]: 2025-11-01 00:23:36.266 [INFO][5558] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4" iface="eth0" netns="" Nov 1 00:23:36.299920 containerd[1713]: 2025-11-01 00:23:36.266 [INFO][5558] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4" Nov 1 00:23:36.299920 containerd[1713]: 2025-11-01 00:23:36.266 [INFO][5558] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4" Nov 1 00:23:36.299920 containerd[1713]: 2025-11-01 00:23:36.288 [INFO][5565] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4" HandleID="k8s-pod-network.febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4" Workload="ci--4081.3.6--n--534d15dd10-k8s-coredns--66bc5c9577--k5c5g-eth0" Nov 1 00:23:36.299920 containerd[1713]: 2025-11-01 00:23:36.288 [INFO][5565] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:36.299920 containerd[1713]: 2025-11-01 00:23:36.288 [INFO][5565] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:36.299920 containerd[1713]: 2025-11-01 00:23:36.296 [WARNING][5565] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4" HandleID="k8s-pod-network.febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4" Workload="ci--4081.3.6--n--534d15dd10-k8s-coredns--66bc5c9577--k5c5g-eth0" Nov 1 00:23:36.299920 containerd[1713]: 2025-11-01 00:23:36.296 [INFO][5565] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4" HandleID="k8s-pod-network.febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4" Workload="ci--4081.3.6--n--534d15dd10-k8s-coredns--66bc5c9577--k5c5g-eth0" Nov 1 00:23:36.299920 containerd[1713]: 2025-11-01 00:23:36.297 [INFO][5565] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:36.299920 containerd[1713]: 2025-11-01 00:23:36.298 [INFO][5558] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4" Nov 1 00:23:36.300694 containerd[1713]: time="2025-11-01T00:23:36.299961992Z" level=info msg="TearDown network for sandbox \"febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4\" successfully" Nov 1 00:23:36.300694 containerd[1713]: time="2025-11-01T00:23:36.299994393Z" level=info msg="StopPodSandbox for \"febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4\" returns successfully" Nov 1 00:23:36.300694 containerd[1713]: time="2025-11-01T00:23:36.300550999Z" level=info msg="RemovePodSandbox for \"febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4\"" Nov 1 00:23:36.300694 containerd[1713]: time="2025-11-01T00:23:36.300589000Z" level=info msg="Forcibly stopping sandbox \"febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4\"" Nov 1 00:23:36.379348 containerd[1713]: 2025-11-01 00:23:36.342 [WARNING][5579] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--534d15dd10-k8s-coredns--66bc5c9577--k5c5g-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"a83589c5-3f06-47b3-8533-6e8d610b7e5a", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-534d15dd10", ContainerID:"54d69ab2a881c500828eaf3ee60d239056a7b3b17b462eeef4b857ee6d93a882", Pod:"coredns-66bc5c9577-k5c5g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic704c3e0827", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:36.379348 containerd[1713]: 2025-11-01 00:23:36.343 [INFO][5579] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4" Nov 1 00:23:36.379348 containerd[1713]: 2025-11-01 00:23:36.343 [INFO][5579] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4" iface="eth0" netns="" Nov 1 00:23:36.379348 containerd[1713]: 2025-11-01 00:23:36.343 [INFO][5579] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4" Nov 1 00:23:36.379348 containerd[1713]: 2025-11-01 00:23:36.343 [INFO][5579] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4" Nov 1 00:23:36.379348 containerd[1713]: 2025-11-01 00:23:36.368 [INFO][5587] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4" HandleID="k8s-pod-network.febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4" Workload="ci--4081.3.6--n--534d15dd10-k8s-coredns--66bc5c9577--k5c5g-eth0" Nov 1 00:23:36.379348 containerd[1713]: 2025-11-01 00:23:36.368 [INFO][5587] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:36.379348 containerd[1713]: 2025-11-01 00:23:36.368 [INFO][5587] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:36.379348 containerd[1713]: 2025-11-01 00:23:36.373 [WARNING][5587] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4" HandleID="k8s-pod-network.febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4" Workload="ci--4081.3.6--n--534d15dd10-k8s-coredns--66bc5c9577--k5c5g-eth0" Nov 1 00:23:36.379348 containerd[1713]: 2025-11-01 00:23:36.374 [INFO][5587] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4" HandleID="k8s-pod-network.febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4" Workload="ci--4081.3.6--n--534d15dd10-k8s-coredns--66bc5c9577--k5c5g-eth0" Nov 1 00:23:36.379348 containerd[1713]: 2025-11-01 00:23:36.375 [INFO][5587] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:36.379348 containerd[1713]: 2025-11-01 00:23:36.376 [INFO][5579] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4" Nov 1 00:23:36.379348 containerd[1713]: time="2025-11-01T00:23:36.377459225Z" level=info msg="TearDown network for sandbox \"febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4\" successfully" Nov 1 00:23:36.391773 containerd[1713]: time="2025-11-01T00:23:36.391731497Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 00:23:36.391899 containerd[1713]: time="2025-11-01T00:23:36.391827698Z" level=info msg="RemovePodSandbox \"febe2a61819ec801fca16f332cf460091d26d4bdd2198647977d152bcbfb7ae4\" returns successfully" Nov 1 00:23:36.392338 containerd[1713]: time="2025-11-01T00:23:36.392310604Z" level=info msg="StopPodSandbox for \"3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b\"" Nov 1 00:23:36.456869 containerd[1713]: 2025-11-01 00:23:36.424 [WARNING][5602] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--534d15dd10-k8s-goldmane--7c778bb748--f7h6c-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"389b7b2a-9963-4ce4-a0c8-a7f3fe88a917", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-534d15dd10", ContainerID:"41f9d0fe5152f8439e1faa06b81db0bbbb78716f156823c0c67229413a534856", Pod:"goldmane-7c778bb748-f7h6c", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.34.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali823d1d4a942", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:36.456869 containerd[1713]: 2025-11-01 00:23:36.424 [INFO][5602] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b" Nov 1 00:23:36.456869 containerd[1713]: 2025-11-01 00:23:36.424 [INFO][5602] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b" iface="eth0" netns="" Nov 1 00:23:36.456869 containerd[1713]: 2025-11-01 00:23:36.424 [INFO][5602] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b" Nov 1 00:23:36.456869 containerd[1713]: 2025-11-01 00:23:36.424 [INFO][5602] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b" Nov 1 00:23:36.456869 containerd[1713]: 2025-11-01 00:23:36.447 [INFO][5609] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b" HandleID="k8s-pod-network.3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b" Workload="ci--4081.3.6--n--534d15dd10-k8s-goldmane--7c778bb748--f7h6c-eth0" Nov 1 00:23:36.456869 containerd[1713]: 2025-11-01 00:23:36.447 [INFO][5609] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:36.456869 containerd[1713]: 2025-11-01 00:23:36.447 [INFO][5609] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:36.456869 containerd[1713]: 2025-11-01 00:23:36.453 [WARNING][5609] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b" HandleID="k8s-pod-network.3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b" Workload="ci--4081.3.6--n--534d15dd10-k8s-goldmane--7c778bb748--f7h6c-eth0" Nov 1 00:23:36.456869 containerd[1713]: 2025-11-01 00:23:36.453 [INFO][5609] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b" HandleID="k8s-pod-network.3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b" Workload="ci--4081.3.6--n--534d15dd10-k8s-goldmane--7c778bb748--f7h6c-eth0" Nov 1 00:23:36.456869 containerd[1713]: 2025-11-01 00:23:36.454 [INFO][5609] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:36.456869 containerd[1713]: 2025-11-01 00:23:36.455 [INFO][5602] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b" Nov 1 00:23:36.457700 containerd[1713]: time="2025-11-01T00:23:36.457351188Z" level=info msg="TearDown network for sandbox \"3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b\" successfully" Nov 1 00:23:36.457700 containerd[1713]: time="2025-11-01T00:23:36.457379188Z" level=info msg="StopPodSandbox for \"3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b\" returns successfully" Nov 1 00:23:36.458292 containerd[1713]: time="2025-11-01T00:23:36.458019296Z" level=info msg="RemovePodSandbox for \"3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b\"" Nov 1 00:23:36.458292 containerd[1713]: time="2025-11-01T00:23:36.458052996Z" level=info msg="Forcibly stopping sandbox \"3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b\"" Nov 1 00:23:36.520741 containerd[1713]: 2025-11-01 00:23:36.489 [WARNING][5623] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--534d15dd10-k8s-goldmane--7c778bb748--f7h6c-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"389b7b2a-9963-4ce4-a0c8-a7f3fe88a917", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-534d15dd10", ContainerID:"41f9d0fe5152f8439e1faa06b81db0bbbb78716f156823c0c67229413a534856", Pod:"goldmane-7c778bb748-f7h6c", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.34.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali823d1d4a942", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:36.520741 containerd[1713]: 2025-11-01 00:23:36.490 [INFO][5623] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b" Nov 1 00:23:36.520741 containerd[1713]: 2025-11-01 00:23:36.490 [INFO][5623] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b" iface="eth0" netns="" Nov 1 00:23:36.520741 containerd[1713]: 2025-11-01 00:23:36.490 [INFO][5623] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b" Nov 1 00:23:36.520741 containerd[1713]: 2025-11-01 00:23:36.490 [INFO][5623] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b" Nov 1 00:23:36.520741 containerd[1713]: 2025-11-01 00:23:36.510 [INFO][5631] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b" HandleID="k8s-pod-network.3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b" Workload="ci--4081.3.6--n--534d15dd10-k8s-goldmane--7c778bb748--f7h6c-eth0" Nov 1 00:23:36.520741 containerd[1713]: 2025-11-01 00:23:36.510 [INFO][5631] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:36.520741 containerd[1713]: 2025-11-01 00:23:36.510 [INFO][5631] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:36.520741 containerd[1713]: 2025-11-01 00:23:36.516 [WARNING][5631] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b" HandleID="k8s-pod-network.3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b" Workload="ci--4081.3.6--n--534d15dd10-k8s-goldmane--7c778bb748--f7h6c-eth0" Nov 1 00:23:36.520741 containerd[1713]: 2025-11-01 00:23:36.516 [INFO][5631] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b" HandleID="k8s-pod-network.3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b" Workload="ci--4081.3.6--n--534d15dd10-k8s-goldmane--7c778bb748--f7h6c-eth0" Nov 1 00:23:36.520741 containerd[1713]: 2025-11-01 00:23:36.518 [INFO][5631] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:36.520741 containerd[1713]: 2025-11-01 00:23:36.519 [INFO][5623] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b" Nov 1 00:23:36.521578 containerd[1713]: time="2025-11-01T00:23:36.520822852Z" level=info msg="TearDown network for sandbox \"3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b\" successfully" Nov 1 00:23:36.529066 containerd[1713]: time="2025-11-01T00:23:36.529010250Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 00:23:36.529225 containerd[1713]: time="2025-11-01T00:23:36.529080551Z" level=info msg="RemovePodSandbox \"3f6bf316db4f84bc673d82075ea5e8c9202d99e3ca9fb6786be618052e8b733b\" returns successfully" Nov 1 00:23:36.529831 containerd[1713]: time="2025-11-01T00:23:36.529560457Z" level=info msg="StopPodSandbox for \"c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456\"" Nov 1 00:23:36.591438 containerd[1713]: 2025-11-01 00:23:36.562 [WARNING][5645] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--534d15dd10-k8s-calico--kube--controllers--d9dc766d8--sj8dp-eth0", GenerateName:"calico-kube-controllers-d9dc766d8-", Namespace:"calico-system", SelfLink:"", UID:"fcbbf525-3d8d-4b5d-819a-2cf75639fa8a", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d9dc766d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-534d15dd10", ContainerID:"e5f1facbc27163fabac73afa34c032d771f3d3e91fc8a1b4a93a5203e5cfbdd3", Pod:"calico-kube-controllers-d9dc766d8-sj8dp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.34.130/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7246475627f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:36.591438 containerd[1713]: 2025-11-01 00:23:36.562 [INFO][5645] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456" Nov 1 00:23:36.591438 containerd[1713]: 2025-11-01 00:23:36.562 [INFO][5645] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456" iface="eth0" netns="" Nov 1 00:23:36.591438 containerd[1713]: 2025-11-01 00:23:36.562 [INFO][5645] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456" Nov 1 00:23:36.591438 containerd[1713]: 2025-11-01 00:23:36.562 [INFO][5645] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456" Nov 1 00:23:36.591438 containerd[1713]: 2025-11-01 00:23:36.581 [INFO][5652] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456" HandleID="k8s-pod-network.c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456" Workload="ci--4081.3.6--n--534d15dd10-k8s-calico--kube--controllers--d9dc766d8--sj8dp-eth0" Nov 1 00:23:36.591438 containerd[1713]: 2025-11-01 00:23:36.581 [INFO][5652] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:36.591438 containerd[1713]: 2025-11-01 00:23:36.581 [INFO][5652] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:36.591438 containerd[1713]: 2025-11-01 00:23:36.587 [WARNING][5652] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456" HandleID="k8s-pod-network.c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456" Workload="ci--4081.3.6--n--534d15dd10-k8s-calico--kube--controllers--d9dc766d8--sj8dp-eth0" Nov 1 00:23:36.591438 containerd[1713]: 2025-11-01 00:23:36.587 [INFO][5652] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456" HandleID="k8s-pod-network.c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456" Workload="ci--4081.3.6--n--534d15dd10-k8s-calico--kube--controllers--d9dc766d8--sj8dp-eth0" Nov 1 00:23:36.591438 containerd[1713]: 2025-11-01 00:23:36.589 [INFO][5652] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:36.591438 containerd[1713]: 2025-11-01 00:23:36.590 [INFO][5645] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456" Nov 1 00:23:36.592024 containerd[1713]: time="2025-11-01T00:23:36.591490103Z" level=info msg="TearDown network for sandbox \"c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456\" successfully" Nov 1 00:23:36.592024 containerd[1713]: time="2025-11-01T00:23:36.591520603Z" level=info msg="StopPodSandbox for \"c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456\" returns successfully" Nov 1 00:23:36.592604 containerd[1713]: time="2025-11-01T00:23:36.592243312Z" level=info msg="RemovePodSandbox for \"c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456\"" Nov 1 00:23:36.592604 containerd[1713]: time="2025-11-01T00:23:36.592278312Z" level=info msg="Forcibly stopping sandbox \"c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456\"" Nov 1 00:23:36.655653 containerd[1713]: 2025-11-01 00:23:36.623 [WARNING][5666] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--534d15dd10-k8s-calico--kube--controllers--d9dc766d8--sj8dp-eth0", GenerateName:"calico-kube-controllers-d9dc766d8-", Namespace:"calico-system", SelfLink:"", UID:"fcbbf525-3d8d-4b5d-819a-2cf75639fa8a", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"d9dc766d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-534d15dd10", ContainerID:"e5f1facbc27163fabac73afa34c032d771f3d3e91fc8a1b4a93a5203e5cfbdd3", Pod:"calico-kube-controllers-d9dc766d8-sj8dp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.34.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7246475627f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:36.655653 containerd[1713]: 2025-11-01 00:23:36.623 [INFO][5666] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456" Nov 1 00:23:36.655653 containerd[1713]: 2025-11-01 00:23:36.623 [INFO][5666] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456" iface="eth0" netns="" Nov 1 00:23:36.655653 containerd[1713]: 2025-11-01 00:23:36.623 [INFO][5666] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456" Nov 1 00:23:36.655653 containerd[1713]: 2025-11-01 00:23:36.623 [INFO][5666] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456" Nov 1 00:23:36.655653 containerd[1713]: 2025-11-01 00:23:36.643 [INFO][5673] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456" HandleID="k8s-pod-network.c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456" Workload="ci--4081.3.6--n--534d15dd10-k8s-calico--kube--controllers--d9dc766d8--sj8dp-eth0" Nov 1 00:23:36.655653 containerd[1713]: 2025-11-01 00:23:36.643 [INFO][5673] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:36.655653 containerd[1713]: 2025-11-01 00:23:36.643 [INFO][5673] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:36.655653 containerd[1713]: 2025-11-01 00:23:36.649 [WARNING][5673] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456" HandleID="k8s-pod-network.c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456" Workload="ci--4081.3.6--n--534d15dd10-k8s-calico--kube--controllers--d9dc766d8--sj8dp-eth0" Nov 1 00:23:36.655653 containerd[1713]: 2025-11-01 00:23:36.649 [INFO][5673] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456" HandleID="k8s-pod-network.c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456" Workload="ci--4081.3.6--n--534d15dd10-k8s-calico--kube--controllers--d9dc766d8--sj8dp-eth0" Nov 1 00:23:36.655653 containerd[1713]: 2025-11-01 00:23:36.652 [INFO][5673] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:36.655653 containerd[1713]: 2025-11-01 00:23:36.653 [INFO][5666] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456" Nov 1 00:23:36.655653 containerd[1713]: time="2025-11-01T00:23:36.654650863Z" level=info msg="TearDown network for sandbox \"c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456\" successfully" Nov 1 00:23:36.663605 containerd[1713]: time="2025-11-01T00:23:36.663512770Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 00:23:36.663605 containerd[1713]: time="2025-11-01T00:23:36.663578771Z" level=info msg="RemovePodSandbox \"c25486c3f1cc5aa383ba90a302be2ad003f74dc4ac97d706ff5b51ca4b81d456\" returns successfully" Nov 1 00:23:36.664116 containerd[1713]: time="2025-11-01T00:23:36.664089077Z" level=info msg="StopPodSandbox for \"c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa\"" Nov 1 00:23:36.730390 containerd[1713]: 2025-11-01 00:23:36.697 [WARNING][5687] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--534d15dd10-k8s-calico--apiserver--6f9c5c4598--4kvfs-eth0", GenerateName:"calico-apiserver-6f9c5c4598-", Namespace:"calico-apiserver", SelfLink:"", UID:"1e7f5e79-08c7-4630-a4c4-82d9824187a0", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f9c5c4598", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-534d15dd10", ContainerID:"ff041b72664dab105c71891bd3cf3b65e7880c7cbb3881d04873870ea1846110", Pod:"calico-apiserver-6f9c5c4598-4kvfs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali899eedf65d4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:36.730390 containerd[1713]: 2025-11-01 00:23:36.697 [INFO][5687] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa" Nov 1 00:23:36.730390 containerd[1713]: 2025-11-01 00:23:36.697 [INFO][5687] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa" iface="eth0" netns="" Nov 1 00:23:36.730390 containerd[1713]: 2025-11-01 00:23:36.697 [INFO][5687] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa" Nov 1 00:23:36.730390 containerd[1713]: 2025-11-01 00:23:36.697 [INFO][5687] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa" Nov 1 00:23:36.730390 containerd[1713]: 2025-11-01 00:23:36.717 [INFO][5694] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa" HandleID="k8s-pod-network.c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa" Workload="ci--4081.3.6--n--534d15dd10-k8s-calico--apiserver--6f9c5c4598--4kvfs-eth0" Nov 1 00:23:36.730390 containerd[1713]: 2025-11-01 00:23:36.718 [INFO][5694] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:36.730390 containerd[1713]: 2025-11-01 00:23:36.718 [INFO][5694] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:36.730390 containerd[1713]: 2025-11-01 00:23:36.726 [WARNING][5694] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa" HandleID="k8s-pod-network.c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa" Workload="ci--4081.3.6--n--534d15dd10-k8s-calico--apiserver--6f9c5c4598--4kvfs-eth0" Nov 1 00:23:36.730390 containerd[1713]: 2025-11-01 00:23:36.726 [INFO][5694] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa" HandleID="k8s-pod-network.c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa" Workload="ci--4081.3.6--n--534d15dd10-k8s-calico--apiserver--6f9c5c4598--4kvfs-eth0" Nov 1 00:23:36.730390 containerd[1713]: 2025-11-01 00:23:36.728 [INFO][5694] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:36.730390 containerd[1713]: 2025-11-01 00:23:36.729 [INFO][5687] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa" Nov 1 00:23:36.730884 containerd[1713]: time="2025-11-01T00:23:36.730432480Z" level=info msg="TearDown network for sandbox \"c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa\" successfully" Nov 1 00:23:36.730884 containerd[1713]: time="2025-11-01T00:23:36.730463181Z" level=info msg="StopPodSandbox for \"c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa\" returns successfully" Nov 1 00:23:36.731347 containerd[1713]: time="2025-11-01T00:23:36.731316391Z" level=info msg="RemovePodSandbox for \"c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa\"" Nov 1 00:23:36.731438 containerd[1713]: time="2025-11-01T00:23:36.731349692Z" level=info msg="Forcibly stopping sandbox \"c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa\"" Nov 1 00:23:36.798415 containerd[1713]: 2025-11-01 00:23:36.764 [WARNING][5709] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--534d15dd10-k8s-calico--apiserver--6f9c5c4598--4kvfs-eth0", GenerateName:"calico-apiserver-6f9c5c4598-", Namespace:"calico-apiserver", SelfLink:"", UID:"1e7f5e79-08c7-4630-a4c4-82d9824187a0", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f9c5c4598", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-534d15dd10", ContainerID:"ff041b72664dab105c71891bd3cf3b65e7880c7cbb3881d04873870ea1846110", Pod:"calico-apiserver-6f9c5c4598-4kvfs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali899eedf65d4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:36.798415 containerd[1713]: 2025-11-01 00:23:36.764 [INFO][5709] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa" Nov 1 00:23:36.798415 containerd[1713]: 2025-11-01 00:23:36.764 [INFO][5709] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa" iface="eth0" netns="" Nov 1 00:23:36.798415 containerd[1713]: 2025-11-01 00:23:36.764 [INFO][5709] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa" Nov 1 00:23:36.798415 containerd[1713]: 2025-11-01 00:23:36.765 [INFO][5709] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa" Nov 1 00:23:36.798415 containerd[1713]: 2025-11-01 00:23:36.786 [INFO][5716] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa" HandleID="k8s-pod-network.c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa" Workload="ci--4081.3.6--n--534d15dd10-k8s-calico--apiserver--6f9c5c4598--4kvfs-eth0" Nov 1 00:23:36.798415 containerd[1713]: 2025-11-01 00:23:36.787 [INFO][5716] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:36.798415 containerd[1713]: 2025-11-01 00:23:36.787 [INFO][5716] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:36.798415 containerd[1713]: 2025-11-01 00:23:36.794 [WARNING][5716] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa" HandleID="k8s-pod-network.c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa" Workload="ci--4081.3.6--n--534d15dd10-k8s-calico--apiserver--6f9c5c4598--4kvfs-eth0" Nov 1 00:23:36.798415 containerd[1713]: 2025-11-01 00:23:36.794 [INFO][5716] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa" HandleID="k8s-pod-network.c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa" Workload="ci--4081.3.6--n--534d15dd10-k8s-calico--apiserver--6f9c5c4598--4kvfs-eth0" Nov 1 00:23:36.798415 containerd[1713]: 2025-11-01 00:23:36.796 [INFO][5716] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:36.798415 containerd[1713]: 2025-11-01 00:23:36.797 [INFO][5709] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa" Nov 1 00:23:36.799087 containerd[1713]: time="2025-11-01T00:23:36.798460424Z" level=info msg="TearDown network for sandbox \"c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa\" successfully" Nov 1 00:23:36.806293 containerd[1713]: time="2025-11-01T00:23:36.806257321Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 00:23:36.806423 containerd[1713]: time="2025-11-01T00:23:36.806323021Z" level=info msg="RemovePodSandbox \"c1e2a5feb00386320071b36930de321436f058c55a7533b8cc0a667cc71febaa\" returns successfully" Nov 1 00:23:36.806886 containerd[1713]: time="2025-11-01T00:23:36.806855028Z" level=info msg="StopPodSandbox for \"958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783\"" Nov 1 00:23:36.875194 containerd[1713]: 2025-11-01 00:23:36.843 [WARNING][5730] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-whisker--768d7d88cf--bc8c7-eth0" Nov 1 00:23:36.875194 containerd[1713]: 2025-11-01 00:23:36.843 [INFO][5730] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783" Nov 1 00:23:36.875194 containerd[1713]: 2025-11-01 00:23:36.843 [INFO][5730] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783" iface="eth0" netns="" Nov 1 00:23:36.875194 containerd[1713]: 2025-11-01 00:23:36.843 [INFO][5730] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783" Nov 1 00:23:36.875194 containerd[1713]: 2025-11-01 00:23:36.843 [INFO][5730] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783" Nov 1 00:23:36.875194 containerd[1713]: 2025-11-01 00:23:36.865 [INFO][5738] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783" HandleID="k8s-pod-network.958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783" Workload="ci--4081.3.6--n--534d15dd10-k8s-whisker--768d7d88cf--bc8c7-eth0" Nov 1 00:23:36.875194 containerd[1713]: 2025-11-01 00:23:36.865 [INFO][5738] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:36.875194 containerd[1713]: 2025-11-01 00:23:36.865 [INFO][5738] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:36.875194 containerd[1713]: 2025-11-01 00:23:36.871 [WARNING][5738] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783" HandleID="k8s-pod-network.958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783" Workload="ci--4081.3.6--n--534d15dd10-k8s-whisker--768d7d88cf--bc8c7-eth0" Nov 1 00:23:36.875194 containerd[1713]: 2025-11-01 00:23:36.871 [INFO][5738] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783" HandleID="k8s-pod-network.958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783" Workload="ci--4081.3.6--n--534d15dd10-k8s-whisker--768d7d88cf--bc8c7-eth0" Nov 1 00:23:36.875194 containerd[1713]: 2025-11-01 00:23:36.872 [INFO][5738] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:36.875194 containerd[1713]: 2025-11-01 00:23:36.873 [INFO][5730] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783" Nov 1 00:23:36.875194 containerd[1713]: time="2025-11-01T00:23:36.875053274Z" level=info msg="TearDown network for sandbox \"958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783\" successfully" Nov 1 00:23:36.875194 containerd[1713]: time="2025-11-01T00:23:36.875077574Z" level=info msg="StopPodSandbox for \"958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783\" returns successfully" Nov 1 00:23:36.876424 containerd[1713]: time="2025-11-01T00:23:36.876054186Z" level=info msg="RemovePodSandbox for \"958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783\"" Nov 1 00:23:36.876424 containerd[1713]: time="2025-11-01T00:23:36.876110687Z" level=info msg="Forcibly stopping sandbox \"958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783\"" Nov 1 00:23:36.942674 containerd[1713]: 2025-11-01 00:23:36.910 [WARNING][5752] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783" WorkloadEndpoint="ci--4081.3.6--n--534d15dd10-k8s-whisker--768d7d88cf--bc8c7-eth0" Nov 1 00:23:36.942674 containerd[1713]: 2025-11-01 00:23:36.910 [INFO][5752] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783" Nov 1 00:23:36.942674 containerd[1713]: 2025-11-01 00:23:36.910 [INFO][5752] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783" iface="eth0" netns="" Nov 1 00:23:36.942674 containerd[1713]: 2025-11-01 00:23:36.910 [INFO][5752] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783" Nov 1 00:23:36.942674 containerd[1713]: 2025-11-01 00:23:36.910 [INFO][5752] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783" Nov 1 00:23:36.942674 containerd[1713]: 2025-11-01 00:23:36.932 [INFO][5759] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783" HandleID="k8s-pod-network.958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783" Workload="ci--4081.3.6--n--534d15dd10-k8s-whisker--768d7d88cf--bc8c7-eth0" Nov 1 00:23:36.942674 containerd[1713]: 2025-11-01 00:23:36.932 [INFO][5759] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:36.942674 containerd[1713]: 2025-11-01 00:23:36.932 [INFO][5759] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:36.942674 containerd[1713]: 2025-11-01 00:23:36.938 [WARNING][5759] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783" HandleID="k8s-pod-network.958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783" Workload="ci--4081.3.6--n--534d15dd10-k8s-whisker--768d7d88cf--bc8c7-eth0" Nov 1 00:23:36.942674 containerd[1713]: 2025-11-01 00:23:36.938 [INFO][5759] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783" HandleID="k8s-pod-network.958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783" Workload="ci--4081.3.6--n--534d15dd10-k8s-whisker--768d7d88cf--bc8c7-eth0" Nov 1 00:23:36.942674 containerd[1713]: 2025-11-01 00:23:36.940 [INFO][5759] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:36.942674 containerd[1713]: 2025-11-01 00:23:36.941 [INFO][5752] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783" Nov 1 00:23:36.942674 containerd[1713]: time="2025-11-01T00:23:36.942603511Z" level=info msg="TearDown network for sandbox \"958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783\" successfully" Nov 1 00:23:36.949700 containerd[1713]: time="2025-11-01T00:23:36.949658599Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 00:23:36.949835 containerd[1713]: time="2025-11-01T00:23:36.949717100Z" level=info msg="RemovePodSandbox \"958eea6fb0d6b011bca79923ff70453694cab93c29d2cab7dc485538ed6fd783\" returns successfully" Nov 1 00:23:36.950249 containerd[1713]: time="2025-11-01T00:23:36.950213506Z" level=info msg="StopPodSandbox for \"81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7\"" Nov 1 00:23:37.018284 containerd[1713]: 2025-11-01 00:23:36.987 [WARNING][5773] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--534d15dd10-k8s-calico--apiserver--6f9c5c4598--f295m-eth0", GenerateName:"calico-apiserver-6f9c5c4598-", Namespace:"calico-apiserver", SelfLink:"", UID:"d8da81c8-f689-4aff-8f06-3115f31a2434", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f9c5c4598", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-534d15dd10", ContainerID:"938512d66ea08bc5f534ef414219bb8cb599ca79c1bfc911f0e027ea94eb0b2f", Pod:"calico-apiserver-6f9c5c4598-f295m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif59f37f9aa6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:37.018284 containerd[1713]: 2025-11-01 00:23:36.988 [INFO][5773] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7" Nov 1 00:23:37.018284 containerd[1713]: 2025-11-01 00:23:36.988 [INFO][5773] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7" iface="eth0" netns="" Nov 1 00:23:37.018284 containerd[1713]: 2025-11-01 00:23:36.988 [INFO][5773] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7" Nov 1 00:23:37.018284 containerd[1713]: 2025-11-01 00:23:36.988 [INFO][5773] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7" Nov 1 00:23:37.018284 containerd[1713]: 2025-11-01 00:23:37.008 [INFO][5780] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7" HandleID="k8s-pod-network.81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7" Workload="ci--4081.3.6--n--534d15dd10-k8s-calico--apiserver--6f9c5c4598--f295m-eth0" Nov 1 00:23:37.018284 containerd[1713]: 2025-11-01 00:23:37.008 [INFO][5780] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:37.018284 containerd[1713]: 2025-11-01 00:23:37.008 [INFO][5780] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:37.018284 containerd[1713]: 2025-11-01 00:23:37.014 [WARNING][5780] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7" HandleID="k8s-pod-network.81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7" Workload="ci--4081.3.6--n--534d15dd10-k8s-calico--apiserver--6f9c5c4598--f295m-eth0" Nov 1 00:23:37.018284 containerd[1713]: 2025-11-01 00:23:37.014 [INFO][5780] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7" HandleID="k8s-pod-network.81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7" Workload="ci--4081.3.6--n--534d15dd10-k8s-calico--apiserver--6f9c5c4598--f295m-eth0" Nov 1 00:23:37.018284 containerd[1713]: 2025-11-01 00:23:37.016 [INFO][5780] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:37.018284 containerd[1713]: 2025-11-01 00:23:37.017 [INFO][5773] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7" Nov 1 00:23:37.019041 containerd[1713]: time="2025-11-01T00:23:37.018333651Z" level=info msg="TearDown network for sandbox \"81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7\" successfully" Nov 1 00:23:37.019041 containerd[1713]: time="2025-11-01T00:23:37.018362651Z" level=info msg="StopPodSandbox for \"81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7\" returns successfully" Nov 1 00:23:37.019208 containerd[1713]: time="2025-11-01T00:23:37.019143361Z" level=info msg="RemovePodSandbox for \"81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7\"" Nov 1 00:23:37.019208 containerd[1713]: time="2025-11-01T00:23:37.019195261Z" level=info msg="Forcibly stopping sandbox \"81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7\"" Nov 1 00:23:37.091347 containerd[1713]: time="2025-11-01T00:23:37.091311656Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:23:37.093505 containerd[1713]: 2025-11-01 00:23:37.058 
[WARNING][5795] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--534d15dd10-k8s-calico--apiserver--6f9c5c4598--f295m-eth0", GenerateName:"calico-apiserver-6f9c5c4598-", Namespace:"calico-apiserver", SelfLink:"", UID:"d8da81c8-f689-4aff-8f06-3115f31a2434", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f9c5c4598", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-534d15dd10", ContainerID:"938512d66ea08bc5f534ef414219bb8cb599ca79c1bfc911f0e027ea94eb0b2f", Pod:"calico-apiserver-6f9c5c4598-f295m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif59f37f9aa6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:37.093505 containerd[1713]: 2025-11-01 00:23:37.059 [INFO][5795] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7" Nov 1 
00:23:37.093505 containerd[1713]: 2025-11-01 00:23:37.059 [INFO][5795] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7" iface="eth0" netns="" Nov 1 00:23:37.093505 containerd[1713]: 2025-11-01 00:23:37.059 [INFO][5795] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7" Nov 1 00:23:37.093505 containerd[1713]: 2025-11-01 00:23:37.059 [INFO][5795] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7" Nov 1 00:23:37.093505 containerd[1713]: 2025-11-01 00:23:37.080 [INFO][5802] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7" HandleID="k8s-pod-network.81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7" Workload="ci--4081.3.6--n--534d15dd10-k8s-calico--apiserver--6f9c5c4598--f295m-eth0" Nov 1 00:23:37.093505 containerd[1713]: 2025-11-01 00:23:37.080 [INFO][5802] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:37.093505 containerd[1713]: 2025-11-01 00:23:37.080 [INFO][5802] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:37.093505 containerd[1713]: 2025-11-01 00:23:37.086 [WARNING][5802] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7" HandleID="k8s-pod-network.81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7" Workload="ci--4081.3.6--n--534d15dd10-k8s-calico--apiserver--6f9c5c4598--f295m-eth0" Nov 1 00:23:37.093505 containerd[1713]: 2025-11-01 00:23:37.086 [INFO][5802] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7" HandleID="k8s-pod-network.81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7" Workload="ci--4081.3.6--n--534d15dd10-k8s-calico--apiserver--6f9c5c4598--f295m-eth0" Nov 1 00:23:37.093505 containerd[1713]: 2025-11-01 00:23:37.087 [INFO][5802] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:37.093505 containerd[1713]: 2025-11-01 00:23:37.089 [INFO][5795] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7" Nov 1 00:23:37.094760 containerd[1713]: time="2025-11-01T00:23:37.093596284Z" level=info msg="TearDown network for sandbox \"81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7\" successfully" Nov 1 00:23:37.104852 containerd[1713]: time="2025-11-01T00:23:37.104799523Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 00:23:37.104951 containerd[1713]: time="2025-11-01T00:23:37.104869224Z" level=info msg="RemovePodSandbox \"81c6048fa6618b44df86a93046a3962597e1b729e29ad8f2ebecf4290303a2d7\" returns successfully" Nov 1 00:23:37.105358 containerd[1713]: time="2025-11-01T00:23:37.105329529Z" level=info msg="StopPodSandbox for \"d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b\"" Nov 1 00:23:37.175461 containerd[1713]: 2025-11-01 00:23:37.142 [WARNING][5816] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--534d15dd10-k8s-coredns--66bc5c9577--9d267-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"79bd8e75-9a83-49b2-ac1b-70aed374e2d6", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-534d15dd10", ContainerID:"f73351e126cfcca2d451ce9f6c040858cdf58f740a50ff1aee26258946809353", Pod:"coredns-66bc5c9577-9d267", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali352cf35b7cd", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:37.175461 containerd[1713]: 2025-11-01 00:23:37.143 [INFO][5816] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b" Nov 1 00:23:37.175461 containerd[1713]: 2025-11-01 00:23:37.143 [INFO][5816] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b" iface="eth0" netns="" Nov 1 00:23:37.175461 containerd[1713]: 2025-11-01 00:23:37.143 [INFO][5816] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b" Nov 1 00:23:37.175461 containerd[1713]: 2025-11-01 00:23:37.143 [INFO][5816] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b" Nov 1 00:23:37.175461 containerd[1713]: 2025-11-01 00:23:37.165 [INFO][5823] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b" HandleID="k8s-pod-network.d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b" Workload="ci--4081.3.6--n--534d15dd10-k8s-coredns--66bc5c9577--9d267-eth0" Nov 1 00:23:37.175461 containerd[1713]: 2025-11-01 00:23:37.165 [INFO][5823] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:37.175461 containerd[1713]: 2025-11-01 00:23:37.165 [INFO][5823] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:37.175461 containerd[1713]: 2025-11-01 00:23:37.171 [WARNING][5823] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b" HandleID="k8s-pod-network.d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b" Workload="ci--4081.3.6--n--534d15dd10-k8s-coredns--66bc5c9577--9d267-eth0" Nov 1 00:23:37.175461 containerd[1713]: 2025-11-01 00:23:37.171 [INFO][5823] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b" HandleID="k8s-pod-network.d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b" Workload="ci--4081.3.6--n--534d15dd10-k8s-coredns--66bc5c9577--9d267-eth0" Nov 1 00:23:37.175461 containerd[1713]: 2025-11-01 00:23:37.173 [INFO][5823] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:37.175461 containerd[1713]: 2025-11-01 00:23:37.174 [INFO][5816] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b" Nov 1 00:23:37.176452 containerd[1713]: time="2025-11-01T00:23:37.175522700Z" level=info msg="TearDown network for sandbox \"d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b\" successfully" Nov 1 00:23:37.176452 containerd[1713]: time="2025-11-01T00:23:37.175588901Z" level=info msg="StopPodSandbox for \"d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b\" returns successfully" Nov 1 00:23:37.176452 containerd[1713]: time="2025-11-01T00:23:37.176322110Z" level=info msg="RemovePodSandbox for \"d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b\"" Nov 1 00:23:37.176452 containerd[1713]: time="2025-11-01T00:23:37.176376010Z" level=info msg="Forcibly stopping sandbox \"d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b\"" Nov 1 00:23:37.241253 containerd[1713]: 2025-11-01 00:23:37.209 [WARNING][5837] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--534d15dd10-k8s-coredns--66bc5c9577--9d267-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"79bd8e75-9a83-49b2-ac1b-70aed374e2d6", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-534d15dd10", ContainerID:"f73351e126cfcca2d451ce9f6c040858cdf58f740a50ff1aee26258946809353", Pod:"coredns-66bc5c9577-9d267", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali352cf35b7cd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, 
HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:37.241253 containerd[1713]: 2025-11-01 00:23:37.210 [INFO][5837] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b" Nov 1 00:23:37.241253 containerd[1713]: 2025-11-01 00:23:37.210 [INFO][5837] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b" iface="eth0" netns="" Nov 1 00:23:37.241253 containerd[1713]: 2025-11-01 00:23:37.210 [INFO][5837] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b" Nov 1 00:23:37.241253 containerd[1713]: 2025-11-01 00:23:37.210 [INFO][5837] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b" Nov 1 00:23:37.241253 containerd[1713]: 2025-11-01 00:23:37.231 [INFO][5844] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b" HandleID="k8s-pod-network.d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b" Workload="ci--4081.3.6--n--534d15dd10-k8s-coredns--66bc5c9577--9d267-eth0" Nov 1 00:23:37.241253 containerd[1713]: 2025-11-01 00:23:37.231 [INFO][5844] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:37.241253 containerd[1713]: 2025-11-01 00:23:37.231 [INFO][5844] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:37.241253 containerd[1713]: 2025-11-01 00:23:37.237 [WARNING][5844] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b" HandleID="k8s-pod-network.d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b" Workload="ci--4081.3.6--n--534d15dd10-k8s-coredns--66bc5c9577--9d267-eth0" Nov 1 00:23:37.241253 containerd[1713]: 2025-11-01 00:23:37.237 [INFO][5844] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b" HandleID="k8s-pod-network.d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b" Workload="ci--4081.3.6--n--534d15dd10-k8s-coredns--66bc5c9577--9d267-eth0" Nov 1 00:23:37.241253 containerd[1713]: 2025-11-01 00:23:37.238 [INFO][5844] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:37.241253 containerd[1713]: 2025-11-01 00:23:37.239 [INFO][5837] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b" Nov 1 00:23:37.241253 containerd[1713]: time="2025-11-01T00:23:37.241213415Z" level=info msg="TearDown network for sandbox \"d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b\" successfully" Nov 1 00:23:37.249417 containerd[1713]: time="2025-11-01T00:23:37.249371016Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 1 00:23:37.249575 containerd[1713]: time="2025-11-01T00:23:37.249434016Z" level=info msg="RemovePodSandbox \"d45f6943eb23ab45c4ae327c73ab891f34e53f8968790372bbfb6e637b757a6b\" returns successfully" Nov 1 00:23:37.344792 containerd[1713]: time="2025-11-01T00:23:37.344601097Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:37.347581 containerd[1713]: time="2025-11-01T00:23:37.347506233Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:23:37.347771 containerd[1713]: time="2025-11-01T00:23:37.347553933Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:23:37.347845 kubelet[3194]: E1101 00:23:37.347792 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:37.348364 kubelet[3194]: E1101 00:23:37.347843 3194 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:37.348364 kubelet[3194]: E1101 00:23:37.347929 3194 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod 
calico-apiserver-6f9c5c4598-f295m_calico-apiserver(d8da81c8-f689-4aff-8f06-3115f31a2434): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:37.348364 kubelet[3194]: E1101 00:23:37.347974 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f9c5c4598-f295m" podUID="d8da81c8-f689-4aff-8f06-3115f31a2434" Nov 1 00:23:40.091473 containerd[1713]: time="2025-11-01T00:23:40.090402148Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:23:40.334889 containerd[1713]: time="2025-11-01T00:23:40.334836079Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:40.339068 containerd[1713]: time="2025-11-01T00:23:40.339014231Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:23:40.339197 containerd[1713]: time="2025-11-01T00:23:40.339127332Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 00:23:40.339450 kubelet[3194]: E1101 00:23:40.339397 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:23:40.340807 kubelet[3194]: E1101 00:23:40.339452 3194 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:23:40.340807 kubelet[3194]: E1101 00:23:40.339669 3194 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-d9dc766d8-sj8dp_calico-system(fcbbf525-3d8d-4b5d-819a-2cf75639fa8a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:40.340807 kubelet[3194]: E1101 00:23:40.339723 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d9dc766d8-sj8dp" podUID="fcbbf525-3d8d-4b5d-819a-2cf75639fa8a" Nov 1 00:23:40.340993 containerd[1713]: time="2025-11-01T00:23:40.339873642Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:23:40.577702 containerd[1713]: time="2025-11-01T00:23:40.577647590Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:40.580844 containerd[1713]: time="2025-11-01T00:23:40.580792429Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:23:40.581007 containerd[1713]: time="2025-11-01T00:23:40.580804129Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:23:40.581091 kubelet[3194]: E1101 00:23:40.581044 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:40.581148 kubelet[3194]: E1101 00:23:40.581101 3194 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:40.581213 kubelet[3194]: E1101 00:23:40.581195 3194 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6f9c5c4598-4kvfs_calico-apiserver(1e7f5e79-08c7-4630-a4c4-82d9824187a0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:40.581348 kubelet[3194]: E1101 00:23:40.581238 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f9c5c4598-4kvfs" podUID="1e7f5e79-08c7-4630-a4c4-82d9824187a0" Nov 1 00:23:42.091849 containerd[1713]: time="2025-11-01T00:23:42.091568365Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:23:42.337783 containerd[1713]: time="2025-11-01T00:23:42.337729517Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:42.342709 containerd[1713]: time="2025-11-01T00:23:42.342568877Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:23:42.342709 containerd[1713]: time="2025-11-01T00:23:42.342663279Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 00:23:42.343117 kubelet[3194]: E1101 00:23:42.342873 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:23:42.343117 kubelet[3194]: E1101 00:23:42.342929 
3194 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:23:42.343117 kubelet[3194]: E1101 00:23:42.343219 3194 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-trnvf_calico-system(763cf2c8-d06c-456e-8d46-4720620695a1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:42.344446 containerd[1713]: time="2025-11-01T00:23:42.343463988Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:23:42.579493 containerd[1713]: time="2025-11-01T00:23:42.579416915Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:42.582548 containerd[1713]: time="2025-11-01T00:23:42.582465052Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:23:42.582691 containerd[1713]: time="2025-11-01T00:23:42.582461052Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:23:42.582971 kubelet[3194]: E1101 00:23:42.582924 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:23:42.584821 kubelet[3194]: E1101 00:23:42.582985 3194 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:23:42.584821 kubelet[3194]: E1101 00:23:42.583198 3194 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-f7h6c_calico-system(389b7b2a-9963-4ce4-a0c8-a7f3fe88a917): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:42.584821 kubelet[3194]: E1101 00:23:42.583249 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-f7h6c" podUID="389b7b2a-9963-4ce4-a0c8-a7f3fe88a917" Nov 1 00:23:42.585110 containerd[1713]: time="2025-11-01T00:23:42.583440164Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:23:42.834504 containerd[1713]: time="2025-11-01T00:23:42.834445977Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:23:42.837337 containerd[1713]: time="2025-11-01T00:23:42.837278512Z" level=error 
msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:23:42.837553 containerd[1713]: time="2025-11-01T00:23:42.837309313Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 00:23:42.837632 kubelet[3194]: E1101 00:23:42.837580 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:23:42.837754 kubelet[3194]: E1101 00:23:42.837641 3194 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:23:42.838321 kubelet[3194]: E1101 00:23:42.837859 3194 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-trnvf_calico-system(763cf2c8-d06c-456e-8d46-4720620695a1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not 
found" logger="UnhandledError" Nov 1 00:23:42.838321 kubelet[3194]: E1101 00:23:42.837948 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-trnvf" podUID="763cf2c8-d06c-456e-8d46-4720620695a1" Nov 1 00:23:49.090742 kubelet[3194]: E1101 00:23:49.090058 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f9c5c4598-f295m" podUID="d8da81c8-f689-4aff-8f06-3115f31a2434" Nov 1 00:23:49.090742 kubelet[3194]: E1101 00:23:49.090658 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c6f5f86c9-hxs55" podUID="729ae25b-84a0-42aa-9bbf-32506f51f3c1" Nov 1 00:23:52.091744 kubelet[3194]: E1101 00:23:52.090914 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f9c5c4598-4kvfs" podUID="1e7f5e79-08c7-4630-a4c4-82d9824187a0" Nov 1 00:23:53.089615 kubelet[3194]: E1101 00:23:53.089553 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d9dc766d8-sj8dp" podUID="fcbbf525-3d8d-4b5d-819a-2cf75639fa8a" Nov 1 00:23:56.091242 kubelet[3194]: E1101 
00:23:56.091038 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-f7h6c" podUID="389b7b2a-9963-4ce4-a0c8-a7f3fe88a917" Nov 1 00:23:58.092826 kubelet[3194]: E1101 00:23:58.092771 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-trnvf" podUID="763cf2c8-d06c-456e-8d46-4720620695a1" Nov 1 00:24:01.091393 containerd[1713]: time="2025-11-01T00:24:01.091345584Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:24:01.351802 containerd[1713]: time="2025-11-01T00:24:01.349144589Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:01.352813 containerd[1713]: 
time="2025-11-01T00:24:01.352639135Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:24:01.352813 containerd[1713]: time="2025-11-01T00:24:01.352752536Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:24:01.353383 kubelet[3194]: E1101 00:24:01.353181 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:24:01.353383 kubelet[3194]: E1101 00:24:01.353254 3194 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:24:01.354545 kubelet[3194]: E1101 00:24:01.354291 3194 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-5c6f5f86c9-hxs55_calico-system(729ae25b-84a0-42aa-9bbf-32506f51f3c1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:01.355599 containerd[1713]: time="2025-11-01T00:24:01.355282770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" 
Nov 1 00:24:01.591021 containerd[1713]: time="2025-11-01T00:24:01.590971683Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:01.594589 containerd[1713]: time="2025-11-01T00:24:01.594511329Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:24:01.595516 containerd[1713]: time="2025-11-01T00:24:01.594554830Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:24:01.595601 kubelet[3194]: E1101 00:24:01.594792 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:24:01.595601 kubelet[3194]: E1101 00:24:01.594834 3194 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:24:01.595601 kubelet[3194]: E1101 00:24:01.594901 3194 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-5c6f5f86c9-hxs55_calico-system(729ae25b-84a0-42aa-9bbf-32506f51f3c1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:01.595761 kubelet[3194]: E1101 00:24:01.594939 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c6f5f86c9-hxs55" podUID="729ae25b-84a0-42aa-9bbf-32506f51f3c1" Nov 1 00:24:04.094293 containerd[1713]: time="2025-11-01T00:24:04.094251446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:24:04.339246 containerd[1713]: time="2025-11-01T00:24:04.339195881Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:04.345965 containerd[1713]: time="2025-11-01T00:24:04.344956157Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:24:04.345965 containerd[1713]: time="2025-11-01T00:24:04.345052858Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 
00:24:04.346549 kubelet[3194]: E1101 00:24:04.346254 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:24:04.346549 kubelet[3194]: E1101 00:24:04.346341 3194 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:24:04.347254 kubelet[3194]: E1101 00:24:04.346527 3194 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6f9c5c4598-f295m_calico-apiserver(d8da81c8-f689-4aff-8f06-3115f31a2434): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:04.347254 kubelet[3194]: E1101 00:24:04.346594 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f9c5c4598-f295m" podUID="d8da81c8-f689-4aff-8f06-3115f31a2434" Nov 1 00:24:04.347888 containerd[1713]: time="2025-11-01T00:24:04.347605592Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:24:04.605730 containerd[1713]: time="2025-11-01T00:24:04.605590199Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:04.608473 containerd[1713]: time="2025-11-01T00:24:04.608257135Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:24:04.608473 containerd[1713]: time="2025-11-01T00:24:04.608357236Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 00:24:04.608840 kubelet[3194]: E1101 00:24:04.608675 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:24:04.608840 kubelet[3194]: E1101 00:24:04.608752 3194 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:24:04.608988 kubelet[3194]: E1101 00:24:04.608871 3194 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-d9dc766d8-sj8dp_calico-system(fcbbf525-3d8d-4b5d-819a-2cf75639fa8a): ErrImagePull: rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:04.608988 kubelet[3194]: E1101 00:24:04.608916 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d9dc766d8-sj8dp" podUID="fcbbf525-3d8d-4b5d-819a-2cf75639fa8a" Nov 1 00:24:06.092080 containerd[1713]: time="2025-11-01T00:24:06.092028932Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:24:06.334026 containerd[1713]: time="2025-11-01T00:24:06.333957328Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:06.336866 containerd[1713]: time="2025-11-01T00:24:06.336587262Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:24:06.336866 containerd[1713]: time="2025-11-01T00:24:06.336685664Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:24:06.337241 kubelet[3194]: E1101 00:24:06.337196 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:24:06.337642 kubelet[3194]: E1101 00:24:06.337254 3194 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:24:06.337642 kubelet[3194]: E1101 00:24:06.337340 3194 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6f9c5c4598-4kvfs_calico-apiserver(1e7f5e79-08c7-4630-a4c4-82d9824187a0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:06.337642 kubelet[3194]: E1101 00:24:06.337387 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f9c5c4598-4kvfs" podUID="1e7f5e79-08c7-4630-a4c4-82d9824187a0" Nov 1 00:24:10.093573 containerd[1713]: time="2025-11-01T00:24:10.093517046Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:24:10.342861 containerd[1713]: time="2025-11-01T00:24:10.342804002Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:10.346402 containerd[1713]: 
time="2025-11-01T00:24:10.345789736Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:24:10.346402 containerd[1713]: time="2025-11-01T00:24:10.345832036Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:24:10.346636 kubelet[3194]: E1101 00:24:10.346117 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:24:10.346636 kubelet[3194]: E1101 00:24:10.346189 3194 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:24:10.346636 kubelet[3194]: E1101 00:24:10.346275 3194 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-f7h6c_calico-system(389b7b2a-9963-4ce4-a0c8-a7f3fe88a917): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:10.346636 kubelet[3194]: E1101 00:24:10.346313 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-f7h6c" podUID="389b7b2a-9963-4ce4-a0c8-a7f3fe88a917" Nov 1 00:24:13.091012 containerd[1713]: time="2025-11-01T00:24:13.090969784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:24:13.339669 containerd[1713]: time="2025-11-01T00:24:13.339453930Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:13.343890 containerd[1713]: time="2025-11-01T00:24:13.343644678Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:24:13.344112 containerd[1713]: time="2025-11-01T00:24:13.343690579Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 00:24:13.344464 kubelet[3194]: E1101 00:24:13.344422 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:24:13.344916 kubelet[3194]: E1101 00:24:13.344471 3194 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:24:13.344916 kubelet[3194]: E1101 00:24:13.344574 3194 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-trnvf_calico-system(763cf2c8-d06c-456e-8d46-4720620695a1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:13.347341 containerd[1713]: time="2025-11-01T00:24:13.347302020Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:24:13.591074 containerd[1713]: time="2025-11-01T00:24:13.590861210Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:13.594632 containerd[1713]: time="2025-11-01T00:24:13.594397351Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:24:13.594632 containerd[1713]: time="2025-11-01T00:24:13.594437351Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 00:24:13.594788 kubelet[3194]: E1101 00:24:13.594704 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:24:13.594788 kubelet[3194]: 
E1101 00:24:13.594757 3194 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:24:13.594934 kubelet[3194]: E1101 00:24:13.594841 3194 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-trnvf_calico-system(763cf2c8-d06c-456e-8d46-4720620695a1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:13.594934 kubelet[3194]: E1101 00:24:13.594893 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-trnvf" podUID="763cf2c8-d06c-456e-8d46-4720620695a1" Nov 1 00:24:14.092896 kubelet[3194]: E1101 00:24:14.092818 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c6f5f86c9-hxs55" podUID="729ae25b-84a0-42aa-9bbf-32506f51f3c1" Nov 1 00:24:16.094380 kubelet[3194]: E1101 00:24:16.094051 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f9c5c4598-f295m" podUID="d8da81c8-f689-4aff-8f06-3115f31a2434" Nov 1 00:24:16.297366 update_engine[1695]: I20251101 00:24:16.296792 1695 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Nov 1 00:24:16.297366 update_engine[1695]: I20251101 00:24:16.296850 1695 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Nov 1 00:24:16.297366 update_engine[1695]: I20251101 00:24:16.297047 1695 prefs.cc:52] aleph-version not 
present in /var/lib/update_engine/prefs Nov 1 00:24:16.298893 update_engine[1695]: I20251101 00:24:16.297922 1695 omaha_request_params.cc:62] Current group set to lts Nov 1 00:24:16.298893 update_engine[1695]: I20251101 00:24:16.298054 1695 update_attempter.cc:499] Already updated boot flags. Skipping. Nov 1 00:24:16.298893 update_engine[1695]: I20251101 00:24:16.298065 1695 update_attempter.cc:643] Scheduling an action processor start. Nov 1 00:24:16.298893 update_engine[1695]: I20251101 00:24:16.298084 1695 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Nov 1 00:24:16.298893 update_engine[1695]: I20251101 00:24:16.298120 1695 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Nov 1 00:24:16.298893 update_engine[1695]: I20251101 00:24:16.298192 1695 omaha_request_action.cc:271] Posting an Omaha request to disabled Nov 1 00:24:16.298893 update_engine[1695]: I20251101 00:24:16.298202 1695 omaha_request_action.cc:272] Request: Nov 1 00:24:16.298893 update_engine[1695]: Nov 1 00:24:16.298893 update_engine[1695]: Nov 1 00:24:16.298893 update_engine[1695]: Nov 1 00:24:16.298893 update_engine[1695]: Nov 1 00:24:16.298893 update_engine[1695]: Nov 1 00:24:16.298893 update_engine[1695]: Nov 1 00:24:16.298893 update_engine[1695]: Nov 1 00:24:16.298893 update_engine[1695]: Nov 1 00:24:16.298893 update_engine[1695]: I20251101 00:24:16.298213 1695 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 1 00:24:16.301993 update_engine[1695]: I20251101 00:24:16.301378 1695 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 1 00:24:16.301993 update_engine[1695]: I20251101 00:24:16.301779 1695 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Nov 1 00:24:16.302150 locksmithd[1800]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Nov 1 00:24:16.322381 update_engine[1695]: E20251101 00:24:16.322233 1695 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 1 00:24:16.322381 update_engine[1695]: I20251101 00:24:16.322331 1695 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Nov 1 00:24:18.090207 kubelet[3194]: E1101 00:24:18.090158 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d9dc766d8-sj8dp" podUID="fcbbf525-3d8d-4b5d-819a-2cf75639fa8a" Nov 1 00:24:20.094757 kubelet[3194]: E1101 00:24:20.094712 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f9c5c4598-4kvfs" podUID="1e7f5e79-08c7-4630-a4c4-82d9824187a0" Nov 1 00:24:23.090218 kubelet[3194]: E1101 00:24:23.089880 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-f7h6c" podUID="389b7b2a-9963-4ce4-a0c8-a7f3fe88a917" Nov 1 00:24:25.091329 kubelet[3194]: E1101 00:24:25.090850 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c6f5f86c9-hxs55" podUID="729ae25b-84a0-42aa-9bbf-32506f51f3c1" Nov 1 00:24:25.091329 kubelet[3194]: E1101 00:24:25.091032 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for 
\"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-trnvf" podUID="763cf2c8-d06c-456e-8d46-4720620695a1" Nov 1 00:24:26.259381 update_engine[1695]: I20251101 00:24:26.259312 1695 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 1 00:24:26.259885 update_engine[1695]: I20251101 00:24:26.259618 1695 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 1 00:24:26.259885 update_engine[1695]: I20251101 00:24:26.259869 1695 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 1 00:24:26.295591 update_engine[1695]: E20251101 00:24:26.295486 1695 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 1 00:24:26.295991 update_engine[1695]: I20251101 00:24:26.295754 1695 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Nov 1 00:24:31.090333 kubelet[3194]: E1101 00:24:31.090283 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f9c5c4598-f295m" podUID="d8da81c8-f689-4aff-8f06-3115f31a2434" Nov 1 00:24:31.091514 kubelet[3194]: E1101 00:24:31.090717 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d9dc766d8-sj8dp" podUID="fcbbf525-3d8d-4b5d-819a-2cf75639fa8a" Nov 1 00:24:33.090121 kubelet[3194]: E1101 00:24:33.090065 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f9c5c4598-4kvfs" podUID="1e7f5e79-08c7-4630-a4c4-82d9824187a0" Nov 1 00:24:35.089699 kubelet[3194]: E1101 00:24:35.089510 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-f7h6c" podUID="389b7b2a-9963-4ce4-a0c8-a7f3fe88a917" Nov 1 00:24:36.258368 update_engine[1695]: I20251101 00:24:36.257738 1695 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 1 00:24:36.258368 update_engine[1695]: I20251101 00:24:36.258026 
1695 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 1 00:24:36.258368 update_engine[1695]: I20251101 00:24:36.258273 1695 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 1 00:24:36.284313 update_engine[1695]: E20251101 00:24:36.284160 1695 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 1 00:24:36.284313 update_engine[1695]: I20251101 00:24:36.284264 1695 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Nov 1 00:24:39.091757 kubelet[3194]: E1101 00:24:39.091666 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c6f5f86c9-hxs55" podUID="729ae25b-84a0-42aa-9bbf-32506f51f3c1" Nov 1 00:24:40.093942 kubelet[3194]: E1101 00:24:40.093828 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-trnvf" podUID="763cf2c8-d06c-456e-8d46-4720620695a1" Nov 1 00:24:44.093081 kubelet[3194]: E1101 00:24:44.092613 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f9c5c4598-f295m" podUID="d8da81c8-f689-4aff-8f06-3115f31a2434" Nov 1 00:24:45.091059 kubelet[3194]: E1101 00:24:45.090173 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f9c5c4598-4kvfs" podUID="1e7f5e79-08c7-4630-a4c4-82d9824187a0" Nov 1 00:24:45.091368 containerd[1713]: 
time="2025-11-01T00:24:45.090796375Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:24:45.333064 containerd[1713]: time="2025-11-01T00:24:45.333005868Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:45.335884 containerd[1713]: time="2025-11-01T00:24:45.335825905Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:24:45.336107 containerd[1713]: time="2025-11-01T00:24:45.335858006Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 00:24:45.336187 kubelet[3194]: E1101 00:24:45.336119 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:24:45.336798 kubelet[3194]: E1101 00:24:45.336182 3194 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:24:45.336798 kubelet[3194]: E1101 00:24:45.336287 3194 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod 
calico-kube-controllers-d9dc766d8-sj8dp_calico-system(fcbbf525-3d8d-4b5d-819a-2cf75639fa8a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:45.336798 kubelet[3194]: E1101 00:24:45.336348 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d9dc766d8-sj8dp" podUID="fcbbf525-3d8d-4b5d-819a-2cf75639fa8a" Nov 1 00:24:46.258654 update_engine[1695]: I20251101 00:24:46.258579 1695 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 1 00:24:46.259659 update_engine[1695]: I20251101 00:24:46.258867 1695 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 1 00:24:46.259659 update_engine[1695]: I20251101 00:24:46.259117 1695 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 1 00:24:46.321250 update_engine[1695]: E20251101 00:24:46.320959 1695 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 1 00:24:46.321250 update_engine[1695]: I20251101 00:24:46.321061 1695 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Nov 1 00:24:46.321250 update_engine[1695]: I20251101 00:24:46.321074 1695 omaha_request_action.cc:617] Omaha request response: Nov 1 00:24:46.321250 update_engine[1695]: E20251101 00:24:46.321187 1695 omaha_request_action.cc:636] Omaha request network transfer failed. 
Nov 1 00:24:46.321990 update_engine[1695]: I20251101 00:24:46.321580 1695 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Nov 1 00:24:46.321990 update_engine[1695]: I20251101 00:24:46.321609 1695 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Nov 1 00:24:46.321990 update_engine[1695]: I20251101 00:24:46.321617 1695 update_attempter.cc:306] Processing Done. Nov 1 00:24:46.321990 update_engine[1695]: E20251101 00:24:46.321746 1695 update_attempter.cc:619] Update failed. Nov 1 00:24:46.321990 update_engine[1695]: I20251101 00:24:46.321763 1695 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Nov 1 00:24:46.321990 update_engine[1695]: I20251101 00:24:46.321770 1695 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Nov 1 00:24:46.321990 update_engine[1695]: I20251101 00:24:46.321779 1695 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Nov 1 00:24:46.323185 update_engine[1695]: I20251101 00:24:46.322089 1695 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Nov 1 00:24:46.323185 update_engine[1695]: I20251101 00:24:46.322152 1695 omaha_request_action.cc:271] Posting an Omaha request to disabled Nov 1 00:24:46.323185 update_engine[1695]: I20251101 00:24:46.322162 1695 omaha_request_action.cc:272] Request: Nov 1 00:24:46.323185 update_engine[1695]: Nov 1 00:24:46.323185 update_engine[1695]: Nov 1 00:24:46.323185 update_engine[1695]: Nov 1 00:24:46.323185 update_engine[1695]: Nov 1 00:24:46.323185 update_engine[1695]: Nov 1 00:24:46.323185 update_engine[1695]: Nov 1 00:24:46.323185 update_engine[1695]: I20251101 00:24:46.322171 1695 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 1 00:24:46.323958 update_engine[1695]: I20251101 00:24:46.323108 1695 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 1 00:24:46.324148 update_engine[1695]: I20251101 00:24:46.324095 1695 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Nov 1 00:24:46.325006 locksmithd[1800]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Nov 1 00:24:46.344635 update_engine[1695]: E20251101 00:24:46.344521 1695 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 1 00:24:46.345086 update_engine[1695]: I20251101 00:24:46.344879 1695 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Nov 1 00:24:46.345086 update_engine[1695]: I20251101 00:24:46.344907 1695 omaha_request_action.cc:617] Omaha request response: Nov 1 00:24:46.345086 update_engine[1695]: I20251101 00:24:46.344919 1695 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Nov 1 00:24:46.345086 update_engine[1695]: I20251101 00:24:46.344926 1695 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Nov 1 00:24:46.345086 update_engine[1695]: I20251101 00:24:46.344933 1695 update_attempter.cc:306] Processing Done. Nov 1 00:24:46.345086 update_engine[1695]: I20251101 00:24:46.344941 1695 update_attempter.cc:310] Error event sent. Nov 1 00:24:46.345086 update_engine[1695]: I20251101 00:24:46.344957 1695 update_check_scheduler.cc:74] Next update check in 46m57s Nov 1 00:24:46.346656 locksmithd[1800]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Nov 1 00:24:49.358346 systemd[1]: run-containerd-runc-k8s.io-dc8baf958d5a6d486f61670b2c265bdb1dbcfabef0b5a2309384f0480e4c1e77-runc.rz0sJs.mount: Deactivated successfully. 
Nov 1 00:24:50.091698 kubelet[3194]: E1101 00:24:50.091656 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-f7h6c" podUID="389b7b2a-9963-4ce4-a0c8-a7f3fe88a917" Nov 1 00:24:51.090423 containerd[1713]: time="2025-11-01T00:24:51.090372893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:24:51.336721 containerd[1713]: time="2025-11-01T00:24:51.336485356Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:51.339510 containerd[1713]: time="2025-11-01T00:24:51.339444193Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:24:51.339677 containerd[1713]: time="2025-11-01T00:24:51.339523794Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:24:51.340168 kubelet[3194]: E1101 00:24:51.340121 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:24:51.340623 kubelet[3194]: E1101 00:24:51.340195 3194 kuberuntime_image.go:43] 
"Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:24:51.340623 kubelet[3194]: E1101 00:24:51.340302 3194 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-5c6f5f86c9-hxs55_calico-system(729ae25b-84a0-42aa-9bbf-32506f51f3c1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:51.343351 containerd[1713]: time="2025-11-01T00:24:51.343225840Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:24:51.588712 containerd[1713]: time="2025-11-01T00:24:51.588659895Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:51.592129 containerd[1713]: time="2025-11-01T00:24:51.591834134Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:24:51.592129 containerd[1713]: time="2025-11-01T00:24:51.591949536Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:24:51.592293 kubelet[3194]: E1101 00:24:51.592125 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:24:51.592293 kubelet[3194]: E1101 00:24:51.592180 3194 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:24:51.592293 kubelet[3194]: E1101 00:24:51.592274 3194 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-5c6f5f86c9-hxs55_calico-system(729ae25b-84a0-42aa-9bbf-32506f51f3c1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:51.592430 kubelet[3194]: E1101 00:24:51.592327 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c6f5f86c9-hxs55" 
podUID="729ae25b-84a0-42aa-9bbf-32506f51f3c1" Nov 1 00:24:53.111177 systemd[1]: Started sshd@7-10.200.8.40:22-10.200.16.10:57126.service - OpenSSH per-connection server daemon (10.200.16.10:57126). Nov 1 00:24:53.750634 sshd[5956]: Accepted publickey for core from 10.200.16.10 port 57126 ssh2: RSA SHA256:4Mlk2155aZYBTfHdK8aj/hVY9PtYtx0s3kqi60O27VY Nov 1 00:24:53.751831 sshd[5956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:24:53.761339 systemd-logind[1692]: New session 10 of user core. Nov 1 00:24:53.769690 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 1 00:24:54.097220 containerd[1713]: time="2025-11-01T00:24:54.096937214Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:24:54.312336 sshd[5956]: pam_unix(sshd:session): session closed for user core Nov 1 00:24:54.320002 systemd[1]: sshd@7-10.200.8.40:22-10.200.16.10:57126.service: Deactivated successfully. Nov 1 00:24:54.321764 systemd-logind[1692]: Session 10 logged out. Waiting for processes to exit. Nov 1 00:24:54.323469 systemd[1]: session-10.scope: Deactivated successfully. Nov 1 00:24:54.328770 systemd-logind[1692]: Removed session 10. 
Nov 1 00:24:54.343211 containerd[1713]: time="2025-11-01T00:24:54.343164779Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:54.346425 containerd[1713]: time="2025-11-01T00:24:54.346372919Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:24:54.346523 containerd[1713]: time="2025-11-01T00:24:54.346476820Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 00:24:54.346757 kubelet[3194]: E1101 00:24:54.346714 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:24:54.347146 kubelet[3194]: E1101 00:24:54.346775 3194 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:24:54.347146 kubelet[3194]: E1101 00:24:54.346875 3194 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-trnvf_calico-system(763cf2c8-d06c-456e-8d46-4720620695a1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:54.348860 
containerd[1713]: time="2025-11-01T00:24:54.348755248Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:24:54.600675 containerd[1713]: time="2025-11-01T00:24:54.600410981Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:54.604158 containerd[1713]: time="2025-11-01T00:24:54.604021026Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:24:54.604158 containerd[1713]: time="2025-11-01T00:24:54.604086626Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 00:24:54.605571 kubelet[3194]: E1101 00:24:54.604824 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:24:54.605571 kubelet[3194]: E1101 00:24:54.604885 3194 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:24:54.605571 kubelet[3194]: E1101 00:24:54.604974 3194 kuberuntime_manager.go:1449] "Unhandled Error" err="container 
csi-node-driver-registrar start failed in pod csi-node-driver-trnvf_calico-system(763cf2c8-d06c-456e-8d46-4720620695a1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:54.605826 kubelet[3194]: E1101 00:24:54.605073 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-trnvf" podUID="763cf2c8-d06c-456e-8d46-4720620695a1" Nov 1 00:24:56.093774 kubelet[3194]: E1101 00:24:56.091779 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d9dc766d8-sj8dp" podUID="fcbbf525-3d8d-4b5d-819a-2cf75639fa8a" Nov 1 
00:24:56.094413 containerd[1713]: time="2025-11-01T00:24:56.091999746Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:24:56.353385 containerd[1713]: time="2025-11-01T00:24:56.352984694Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:56.356059 containerd[1713]: time="2025-11-01T00:24:56.356006332Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:24:56.356180 containerd[1713]: time="2025-11-01T00:24:56.355986831Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:24:56.356495 kubelet[3194]: E1101 00:24:56.356422 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:24:56.356495 kubelet[3194]: E1101 00:24:56.356490 3194 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:24:56.356875 kubelet[3194]: E1101 00:24:56.356609 3194 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6f9c5c4598-4kvfs_calico-apiserver(1e7f5e79-08c7-4630-a4c4-82d9824187a0): ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:56.356875 kubelet[3194]: E1101 00:24:56.356657 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f9c5c4598-4kvfs" podUID="1e7f5e79-08c7-4630-a4c4-82d9824187a0" Nov 1 00:24:58.092557 containerd[1713]: time="2025-11-01T00:24:58.092173619Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:24:58.334296 containerd[1713]: time="2025-11-01T00:24:58.334247285Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:24:58.337652 containerd[1713]: time="2025-11-01T00:24:58.337589828Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:24:58.338027 containerd[1713]: time="2025-11-01T00:24:58.337824331Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:24:58.338217 kubelet[3194]: E1101 00:24:58.338163 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:24:58.340689 kubelet[3194]: E1101 00:24:58.338233 3194 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:24:58.340689 kubelet[3194]: E1101 00:24:58.338365 3194 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6f9c5c4598-f295m_calico-apiserver(d8da81c8-f689-4aff-8f06-3115f31a2434): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:58.340689 kubelet[3194]: E1101 00:24:58.338848 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f9c5c4598-f295m" podUID="d8da81c8-f689-4aff-8f06-3115f31a2434" Nov 1 00:24:59.429915 systemd[1]: Started sshd@8-10.200.8.40:22-10.200.16.10:57132.service - OpenSSH per-connection server daemon (10.200.16.10:57132). 
Nov 1 00:25:00.056862 sshd[5993]: Accepted publickey for core from 10.200.16.10 port 57132 ssh2: RSA SHA256:4Mlk2155aZYBTfHdK8aj/hVY9PtYtx0s3kqi60O27VY Nov 1 00:25:00.058741 sshd[5993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:25:00.063650 systemd-logind[1692]: New session 11 of user core. Nov 1 00:25:00.071159 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 1 00:25:00.619786 sshd[5993]: pam_unix(sshd:session): session closed for user core Nov 1 00:25:00.625408 systemd[1]: sshd@8-10.200.8.40:22-10.200.16.10:57132.service: Deactivated successfully. Nov 1 00:25:00.626235 systemd-logind[1692]: Session 11 logged out. Waiting for processes to exit. Nov 1 00:25:00.630131 systemd[1]: session-11.scope: Deactivated successfully. Nov 1 00:25:00.632021 systemd-logind[1692]: Removed session 11. Nov 1 00:25:02.092051 containerd[1713]: time="2025-11-01T00:25:02.092006178Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:25:02.460424 containerd[1713]: time="2025-11-01T00:25:02.460278042Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Nov 1 00:25:02.463659 containerd[1713]: time="2025-11-01T00:25:02.463605485Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:25:02.463791 containerd[1713]: time="2025-11-01T00:25:02.463715686Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:25:02.464011 kubelet[3194]: E1101 00:25:02.463962 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:25:02.464385 kubelet[3194]: E1101 00:25:02.464023 3194 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:25:02.464385 kubelet[3194]: E1101 00:25:02.464109 3194 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-f7h6c_calico-system(389b7b2a-9963-4ce4-a0c8-a7f3fe88a917): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:25:02.464385 kubelet[3194]: E1101 00:25:02.464150 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-f7h6c" podUID="389b7b2a-9963-4ce4-a0c8-a7f3fe88a917" Nov 1 00:25:05.091698 kubelet[3194]: E1101 00:25:05.091647 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to 
resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c6f5f86c9-hxs55" podUID="729ae25b-84a0-42aa-9bbf-32506f51f3c1" Nov 1 00:25:05.735267 systemd[1]: Started sshd@9-10.200.8.40:22-10.200.16.10:46912.service - OpenSSH per-connection server daemon (10.200.16.10:46912). Nov 1 00:25:06.098578 kubelet[3194]: E1101 00:25:06.096877 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-trnvf" podUID="763cf2c8-d06c-456e-8d46-4720620695a1" Nov 1 00:25:06.369889 sshd[6007]: Accepted publickey for core from 10.200.16.10 port 46912 ssh2: RSA 
SHA256:4Mlk2155aZYBTfHdK8aj/hVY9PtYtx0s3kqi60O27VY Nov 1 00:25:06.372435 sshd[6007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:25:06.379423 systemd-logind[1692]: New session 12 of user core. Nov 1 00:25:06.383941 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 1 00:25:06.961548 sshd[6007]: pam_unix(sshd:session): session closed for user core Nov 1 00:25:06.965644 systemd-logind[1692]: Session 12 logged out. Waiting for processes to exit. Nov 1 00:25:06.969647 systemd[1]: sshd@9-10.200.8.40:22-10.200.16.10:46912.service: Deactivated successfully. Nov 1 00:25:06.973193 systemd[1]: session-12.scope: Deactivated successfully. Nov 1 00:25:06.975189 systemd-logind[1692]: Removed session 12. Nov 1 00:25:07.084986 systemd[1]: Started sshd@10-10.200.8.40:22-10.200.16.10:46918.service - OpenSSH per-connection server daemon (10.200.16.10:46918). Nov 1 00:25:07.710773 sshd[6021]: Accepted publickey for core from 10.200.16.10 port 46918 ssh2: RSA SHA256:4Mlk2155aZYBTfHdK8aj/hVY9PtYtx0s3kqi60O27VY Nov 1 00:25:07.712353 sshd[6021]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:25:07.716591 systemd-logind[1692]: New session 13 of user core. Nov 1 00:25:07.721688 systemd[1]: Started session-13.scope - Session 13 of User core. 
Nov 1 00:25:08.096439 kubelet[3194]: E1101 00:25:08.095348 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d9dc766d8-sj8dp" podUID="fcbbf525-3d8d-4b5d-819a-2cf75639fa8a" Nov 1 00:25:08.311861 sshd[6021]: pam_unix(sshd:session): session closed for user core Nov 1 00:25:08.315166 systemd-logind[1692]: Session 13 logged out. Waiting for processes to exit. Nov 1 00:25:08.318047 systemd[1]: sshd@10-10.200.8.40:22-10.200.16.10:46918.service: Deactivated successfully. Nov 1 00:25:08.322385 systemd[1]: session-13.scope: Deactivated successfully. Nov 1 00:25:08.324977 systemd-logind[1692]: Removed session 13. Nov 1 00:25:08.428849 systemd[1]: Started sshd@11-10.200.8.40:22-10.200.16.10:46924.service - OpenSSH per-connection server daemon (10.200.16.10:46924). Nov 1 00:25:09.067094 sshd[6032]: Accepted publickey for core from 10.200.16.10 port 46924 ssh2: RSA SHA256:4Mlk2155aZYBTfHdK8aj/hVY9PtYtx0s3kqi60O27VY Nov 1 00:25:09.068286 sshd[6032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:25:09.073666 systemd-logind[1692]: New session 14 of user core. Nov 1 00:25:09.080876 systemd[1]: Started session-14.scope - Session 14 of User core. 
Nov 1 00:25:09.091965 kubelet[3194]: E1101 00:25:09.091891 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f9c5c4598-4kvfs" podUID="1e7f5e79-08c7-4630-a4c4-82d9824187a0" Nov 1 00:25:09.658094 sshd[6032]: pam_unix(sshd:session): session closed for user core Nov 1 00:25:09.662753 systemd[1]: sshd@11-10.200.8.40:22-10.200.16.10:46924.service: Deactivated successfully. Nov 1 00:25:09.666347 systemd[1]: session-14.scope: Deactivated successfully. Nov 1 00:25:09.671216 systemd-logind[1692]: Session 14 logged out. Waiting for processes to exit. Nov 1 00:25:09.673300 systemd-logind[1692]: Removed session 14. Nov 1 00:25:13.090749 kubelet[3194]: E1101 00:25:13.090696 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f9c5c4598-f295m" podUID="d8da81c8-f689-4aff-8f06-3115f31a2434" Nov 1 00:25:14.778752 systemd[1]: Started sshd@12-10.200.8.40:22-10.200.16.10:38664.service - OpenSSH per-connection server daemon (10.200.16.10:38664). 
Nov 1 00:25:15.410843 sshd[6047]: Accepted publickey for core from 10.200.16.10 port 38664 ssh2: RSA SHA256:4Mlk2155aZYBTfHdK8aj/hVY9PtYtx0s3kqi60O27VY Nov 1 00:25:15.411810 sshd[6047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:25:15.418264 systemd-logind[1692]: New session 15 of user core. Nov 1 00:25:15.424895 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 1 00:25:15.926665 sshd[6047]: pam_unix(sshd:session): session closed for user core Nov 1 00:25:15.930426 systemd-logind[1692]: Session 15 logged out. Waiting for processes to exit. Nov 1 00:25:15.931289 systemd[1]: sshd@12-10.200.8.40:22-10.200.16.10:38664.service: Deactivated successfully. Nov 1 00:25:15.933659 systemd[1]: session-15.scope: Deactivated successfully. Nov 1 00:25:15.934530 systemd-logind[1692]: Removed session 15. Nov 1 00:25:16.096053 kubelet[3194]: E1101 00:25:16.095202 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-f7h6c" podUID="389b7b2a-9963-4ce4-a0c8-a7f3fe88a917" Nov 1 00:25:18.094491 kubelet[3194]: E1101 00:25:18.094427 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to 
\"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-trnvf" podUID="763cf2c8-d06c-456e-8d46-4720620695a1" Nov 1 00:25:19.089640 kubelet[3194]: E1101 00:25:19.089581 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d9dc766d8-sj8dp" podUID="fcbbf525-3d8d-4b5d-819a-2cf75639fa8a" Nov 1 00:25:20.092977 kubelet[3194]: E1101 00:25:20.092931 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c6f5f86c9-hxs55" podUID="729ae25b-84a0-42aa-9bbf-32506f51f3c1" Nov 1 00:25:21.046649 systemd[1]: Started sshd@13-10.200.8.40:22-10.200.16.10:52844.service - OpenSSH per-connection server daemon (10.200.16.10:52844). Nov 1 00:25:21.686363 sshd[6086]: Accepted publickey for core from 10.200.16.10 port 52844 ssh2: RSA SHA256:4Mlk2155aZYBTfHdK8aj/hVY9PtYtx0s3kqi60O27VY Nov 1 00:25:21.688606 sshd[6086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:25:21.695513 systemd-logind[1692]: New session 16 of user core. Nov 1 00:25:21.700788 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 1 00:25:22.094411 kubelet[3194]: E1101 00:25:22.092193 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f9c5c4598-4kvfs" podUID="1e7f5e79-08c7-4630-a4c4-82d9824187a0" Nov 1 00:25:22.219414 sshd[6086]: pam_unix(sshd:session): session closed for user core Nov 1 00:25:22.223709 systemd[1]: sshd@13-10.200.8.40:22-10.200.16.10:52844.service: Deactivated successfully. Nov 1 00:25:22.228130 systemd[1]: session-16.scope: Deactivated successfully. Nov 1 00:25:22.229481 systemd-logind[1692]: Session 16 logged out. Waiting for processes to exit. Nov 1 00:25:22.231224 systemd-logind[1692]: Removed session 16. 
Nov 1 00:25:24.094451 kubelet[3194]: E1101 00:25:24.094399 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f9c5c4598-f295m" podUID="d8da81c8-f689-4aff-8f06-3115f31a2434"
Nov 1 00:25:27.089557 kubelet[3194]: E1101 00:25:27.089472 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-f7h6c" podUID="389b7b2a-9963-4ce4-a0c8-a7f3fe88a917"
Nov 1 00:25:27.344464 systemd[1]: Started sshd@14-10.200.8.40:22-10.200.16.10:52858.service - OpenSSH per-connection server daemon (10.200.16.10:52858).
Nov 1 00:25:27.983774 sshd[6099]: Accepted publickey for core from 10.200.16.10 port 52858 ssh2: RSA SHA256:4Mlk2155aZYBTfHdK8aj/hVY9PtYtx0s3kqi60O27VY
Nov 1 00:25:27.987652 sshd[6099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:25:27.993608 systemd-logind[1692]: New session 17 of user core.
Nov 1 00:25:28.004160 systemd[1]: Started session-17.scope - Session 17 of User core.
Nov 1 00:25:28.492083 sshd[6099]: pam_unix(sshd:session): session closed for user core
Nov 1 00:25:28.495986 systemd-logind[1692]: Session 17 logged out. Waiting for processes to exit.
Nov 1 00:25:28.496570 systemd[1]: sshd@14-10.200.8.40:22-10.200.16.10:52858.service: Deactivated successfully.
Nov 1 00:25:28.498726 systemd[1]: session-17.scope: Deactivated successfully.
Nov 1 00:25:28.500086 systemd-logind[1692]: Removed session 17.
Nov 1 00:25:30.092614 kubelet[3194]: E1101 00:25:30.091554 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d9dc766d8-sj8dp" podUID="fcbbf525-3d8d-4b5d-819a-2cf75639fa8a"
Nov 1 00:25:31.091620 kubelet[3194]: E1101 00:25:31.091500 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c6f5f86c9-hxs55" podUID="729ae25b-84a0-42aa-9bbf-32506f51f3c1"
Nov 1 00:25:31.092099 kubelet[3194]: E1101 00:25:31.092030 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-trnvf" podUID="763cf2c8-d06c-456e-8d46-4720620695a1"
Nov 1 00:25:33.604044 systemd[1]: Started sshd@15-10.200.8.40:22-10.200.16.10:56124.service - OpenSSH per-connection server daemon (10.200.16.10:56124).
Nov 1 00:25:34.244542 sshd[6112]: Accepted publickey for core from 10.200.16.10 port 56124 ssh2: RSA SHA256:4Mlk2155aZYBTfHdK8aj/hVY9PtYtx0s3kqi60O27VY
Nov 1 00:25:34.247573 sshd[6112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:25:34.255660 systemd-logind[1692]: New session 18 of user core.
Nov 1 00:25:34.264961 systemd[1]: Started session-18.scope - Session 18 of User core.
Nov 1 00:25:34.779763 sshd[6112]: pam_unix(sshd:session): session closed for user core
Nov 1 00:25:34.783490 systemd[1]: sshd@15-10.200.8.40:22-10.200.16.10:56124.service: Deactivated successfully.
Nov 1 00:25:34.785883 systemd[1]: session-18.scope: Deactivated successfully.
Nov 1 00:25:34.787598 systemd-logind[1692]: Session 18 logged out. Waiting for processes to exit.
Nov 1 00:25:34.789152 systemd-logind[1692]: Removed session 18.
Nov 1 00:25:34.892082 systemd[1]: Started sshd@16-10.200.8.40:22-10.200.16.10:56126.service - OpenSSH per-connection server daemon (10.200.16.10:56126).
Nov 1 00:25:35.089769 kubelet[3194]: E1101 00:25:35.089316 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f9c5c4598-f295m" podUID="d8da81c8-f689-4aff-8f06-3115f31a2434"
Nov 1 00:25:35.520641 sshd[6125]: Accepted publickey for core from 10.200.16.10 port 56126 ssh2: RSA SHA256:4Mlk2155aZYBTfHdK8aj/hVY9PtYtx0s3kqi60O27VY
Nov 1 00:25:35.523887 sshd[6125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:25:35.532007 systemd-logind[1692]: New session 19 of user core.
Nov 1 00:25:35.539812 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 1 00:25:36.126887 sshd[6125]: pam_unix(sshd:session): session closed for user core
Nov 1 00:25:36.132450 systemd[1]: sshd@16-10.200.8.40:22-10.200.16.10:56126.service: Deactivated successfully.
Nov 1 00:25:36.137818 systemd[1]: session-19.scope: Deactivated successfully.
Nov 1 00:25:36.139317 systemd-logind[1692]: Session 19 logged out. Waiting for processes to exit.
Nov 1 00:25:36.141747 systemd-logind[1692]: Removed session 19.
Nov 1 00:25:36.245346 systemd[1]: Started sshd@17-10.200.8.40:22-10.200.16.10:56130.service - OpenSSH per-connection server daemon (10.200.16.10:56130).
Nov 1 00:25:36.880836 sshd[6138]: Accepted publickey for core from 10.200.16.10 port 56130 ssh2: RSA SHA256:4Mlk2155aZYBTfHdK8aj/hVY9PtYtx0s3kqi60O27VY
Nov 1 00:25:36.883489 sshd[6138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:25:36.891010 systemd-logind[1692]: New session 20 of user core.
Nov 1 00:25:36.899239 systemd[1]: Started session-20.scope - Session 20 of User core.
Nov 1 00:25:37.090491 kubelet[3194]: E1101 00:25:37.090259 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f9c5c4598-4kvfs" podUID="1e7f5e79-08c7-4630-a4c4-82d9824187a0"
Nov 1 00:25:37.984550 sshd[6138]: pam_unix(sshd:session): session closed for user core
Nov 1 00:25:37.988476 systemd-logind[1692]: Session 20 logged out. Waiting for processes to exit.
Nov 1 00:25:37.990806 systemd[1]: sshd@17-10.200.8.40:22-10.200.16.10:56130.service: Deactivated successfully.
Nov 1 00:25:37.994343 systemd[1]: session-20.scope: Deactivated successfully.
Nov 1 00:25:37.995803 systemd-logind[1692]: Removed session 20.
Nov 1 00:25:38.106597 systemd[1]: Started sshd@18-10.200.8.40:22-10.200.16.10:56138.service - OpenSSH per-connection server daemon (10.200.16.10:56138).
Nov 1 00:25:38.736399 sshd[6155]: Accepted publickey for core from 10.200.16.10 port 56138 ssh2: RSA SHA256:4Mlk2155aZYBTfHdK8aj/hVY9PtYtx0s3kqi60O27VY
Nov 1 00:25:38.739943 sshd[6155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:25:38.747218 systemd-logind[1692]: New session 21 of user core.
Nov 1 00:25:38.753096 systemd[1]: Started session-21.scope - Session 21 of User core.
Nov 1 00:25:39.503562 sshd[6155]: pam_unix(sshd:session): session closed for user core
Nov 1 00:25:39.507495 systemd[1]: sshd@18-10.200.8.40:22-10.200.16.10:56138.service: Deactivated successfully.
Nov 1 00:25:39.511685 systemd[1]: session-21.scope: Deactivated successfully.
Nov 1 00:25:39.514236 systemd-logind[1692]: Session 21 logged out. Waiting for processes to exit.
Nov 1 00:25:39.515474 systemd-logind[1692]: Removed session 21.
Nov 1 00:25:39.611211 systemd[1]: Started sshd@19-10.200.8.40:22-10.200.16.10:56148.service - OpenSSH per-connection server daemon (10.200.16.10:56148).
Nov 1 00:25:40.247600 sshd[6168]: Accepted publickey for core from 10.200.16.10 port 56148 ssh2: RSA SHA256:4Mlk2155aZYBTfHdK8aj/hVY9PtYtx0s3kqi60O27VY
Nov 1 00:25:40.250925 sshd[6168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:25:40.262298 systemd-logind[1692]: New session 22 of user core.
Nov 1 00:25:40.266732 systemd[1]: Started session-22.scope - Session 22 of User core.
Nov 1 00:25:40.838768 sshd[6168]: pam_unix(sshd:session): session closed for user core
Nov 1 00:25:40.843439 systemd-logind[1692]: Session 22 logged out. Waiting for processes to exit.
Nov 1 00:25:40.843962 systemd[1]: sshd@19-10.200.8.40:22-10.200.16.10:56148.service: Deactivated successfully.
Nov 1 00:25:40.848307 systemd[1]: session-22.scope: Deactivated successfully.
Nov 1 00:25:40.852989 systemd-logind[1692]: Removed session 22.
Nov 1 00:25:41.090180 kubelet[3194]: E1101 00:25:41.090057 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-f7h6c" podUID="389b7b2a-9963-4ce4-a0c8-a7f3fe88a917"
Nov 1 00:25:42.093210 kubelet[3194]: E1101 00:25:42.092610 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d9dc766d8-sj8dp" podUID="fcbbf525-3d8d-4b5d-819a-2cf75639fa8a"
Nov 1 00:25:42.094554 kubelet[3194]: E1101 00:25:42.094035 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c6f5f86c9-hxs55" podUID="729ae25b-84a0-42aa-9bbf-32506f51f3c1"
Nov 1 00:25:44.093469 kubelet[3194]: E1101 00:25:44.093408 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-trnvf" podUID="763cf2c8-d06c-456e-8d46-4720620695a1"
Nov 1 00:25:45.949802 systemd[1]: Started sshd@20-10.200.8.40:22-10.200.16.10:53852.service - OpenSSH per-connection server daemon (10.200.16.10:53852).
Nov 1 00:25:46.589634 sshd[6185]: Accepted publickey for core from 10.200.16.10 port 53852 ssh2: RSA SHA256:4Mlk2155aZYBTfHdK8aj/hVY9PtYtx0s3kqi60O27VY
Nov 1 00:25:46.590899 sshd[6185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:25:46.601110 systemd-logind[1692]: New session 23 of user core.
Nov 1 00:25:46.604283 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 1 00:25:47.135682 sshd[6185]: pam_unix(sshd:session): session closed for user core
Nov 1 00:25:47.144349 systemd[1]: sshd@20-10.200.8.40:22-10.200.16.10:53852.service: Deactivated successfully.
Nov 1 00:25:47.150814 systemd[1]: session-23.scope: Deactivated successfully.
Nov 1 00:25:47.153761 systemd-logind[1692]: Session 23 logged out. Waiting for processes to exit.
Nov 1 00:25:47.154970 systemd-logind[1692]: Removed session 23.
Nov 1 00:25:48.091495 kubelet[3194]: E1101 00:25:48.091441 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f9c5c4598-f295m" podUID="d8da81c8-f689-4aff-8f06-3115f31a2434"
Nov 1 00:25:49.344797 systemd[1]: run-containerd-runc-k8s.io-dc8baf958d5a6d486f61670b2c265bdb1dbcfabef0b5a2309384f0480e4c1e77-runc.Qoyx2D.mount: Deactivated successfully.
Nov 1 00:25:50.094262 kubelet[3194]: E1101 00:25:50.094206 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f9c5c4598-4kvfs" podUID="1e7f5e79-08c7-4630-a4c4-82d9824187a0"
Nov 1 00:25:52.251270 systemd[1]: Started sshd@21-10.200.8.40:22-10.200.16.10:41304.service - OpenSSH per-connection server daemon (10.200.16.10:41304).
Nov 1 00:25:52.872710 sshd[6220]: Accepted publickey for core from 10.200.16.10 port 41304 ssh2: RSA SHA256:4Mlk2155aZYBTfHdK8aj/hVY9PtYtx0s3kqi60O27VY
Nov 1 00:25:52.874222 sshd[6220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:25:52.880010 systemd-logind[1692]: New session 24 of user core.
Nov 1 00:25:52.891736 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 1 00:25:53.090226 kubelet[3194]: E1101 00:25:53.089951 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c6f5f86c9-hxs55" podUID="729ae25b-84a0-42aa-9bbf-32506f51f3c1"
Nov 1 00:25:53.410307 sshd[6220]: pam_unix(sshd:session): session closed for user core
Nov 1 00:25:53.415765 systemd-logind[1692]: Session 24 logged out. Waiting for processes to exit.
Nov 1 00:25:53.417004 systemd[1]: sshd@21-10.200.8.40:22-10.200.16.10:41304.service: Deactivated successfully.
Nov 1 00:25:53.421166 systemd[1]: session-24.scope: Deactivated successfully.
Nov 1 00:25:53.424398 systemd-logind[1692]: Removed session 24.
Nov 1 00:25:54.089920 kubelet[3194]: E1101 00:25:54.089808 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-f7h6c" podUID="389b7b2a-9963-4ce4-a0c8-a7f3fe88a917"
Nov 1 00:25:55.092760 kubelet[3194]: E1101 00:25:55.092458 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-trnvf" podUID="763cf2c8-d06c-456e-8d46-4720620695a1"
Nov 1 00:25:55.094192 kubelet[3194]: E1101 00:25:55.094155 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d9dc766d8-sj8dp" podUID="fcbbf525-3d8d-4b5d-819a-2cf75639fa8a"
Nov 1 00:25:58.537069 systemd[1]: Started sshd@22-10.200.8.40:22-10.200.16.10:41320.service - OpenSSH per-connection server daemon (10.200.16.10:41320).
Nov 1 00:25:59.174579 sshd[6233]: Accepted publickey for core from 10.200.16.10 port 41320 ssh2: RSA SHA256:4Mlk2155aZYBTfHdK8aj/hVY9PtYtx0s3kqi60O27VY
Nov 1 00:25:59.176936 sshd[6233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:25:59.187216 systemd-logind[1692]: New session 25 of user core.
Nov 1 00:25:59.190965 systemd[1]: Started session-25.scope - Session 25 of User core.
Nov 1 00:25:59.717922 sshd[6233]: pam_unix(sshd:session): session closed for user core
Nov 1 00:25:59.722248 systemd-logind[1692]: Session 25 logged out. Waiting for processes to exit.
Nov 1 00:25:59.723038 systemd[1]: sshd@22-10.200.8.40:22-10.200.16.10:41320.service: Deactivated successfully.
Nov 1 00:25:59.726676 systemd[1]: session-25.scope: Deactivated successfully.
Nov 1 00:25:59.730808 systemd-logind[1692]: Removed session 25.
Nov 1 00:26:02.089774 kubelet[3194]: E1101 00:26:02.089321 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f9c5c4598-f295m" podUID="d8da81c8-f689-4aff-8f06-3115f31a2434"
Nov 1 00:26:03.092583 kubelet[3194]: E1101 00:26:03.091233 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f9c5c4598-4kvfs" podUID="1e7f5e79-08c7-4630-a4c4-82d9824187a0"
Nov 1 00:26:04.840885 systemd[1]: Started sshd@23-10.200.8.40:22-10.200.16.10:53952.service - OpenSSH per-connection server daemon (10.200.16.10:53952).
Nov 1 00:26:05.486577 sshd[6252]: Accepted publickey for core from 10.200.16.10 port 53952 ssh2: RSA SHA256:4Mlk2155aZYBTfHdK8aj/hVY9PtYtx0s3kqi60O27VY
Nov 1 00:26:05.488790 sshd[6252]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:26:05.496703 systemd-logind[1692]: New session 26 of user core.
Nov 1 00:26:05.501694 systemd[1]: Started session-26.scope - Session 26 of User core.
Nov 1 00:26:06.058687 sshd[6252]: pam_unix(sshd:session): session closed for user core
Nov 1 00:26:06.063759 systemd-logind[1692]: Session 26 logged out. Waiting for processes to exit.
Nov 1 00:26:06.065323 systemd[1]: sshd@23-10.200.8.40:22-10.200.16.10:53952.service: Deactivated successfully.
Nov 1 00:26:06.068314 systemd[1]: session-26.scope: Deactivated successfully.
Nov 1 00:26:06.070209 systemd-logind[1692]: Removed session 26.
Nov 1 00:26:07.090522 kubelet[3194]: E1101 00:26:07.090466 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5c6f5f86c9-hxs55" podUID="729ae25b-84a0-42aa-9bbf-32506f51f3c1"
Nov 1 00:26:08.092372 containerd[1713]: time="2025-11-01T00:26:08.091414418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Nov 1 00:26:08.354692 containerd[1713]: time="2025-11-01T00:26:08.354492215Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Nov 1 00:26:08.366671 containerd[1713]: time="2025-11-01T00:26:08.366435465Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Nov 1 00:26:08.366671 containerd[1713]: time="2025-11-01T00:26:08.366572367Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Nov 1 00:26:08.367203 kubelet[3194]: E1101 00:26:08.367097 3194 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 1 00:26:08.367749 kubelet[3194]: E1101 00:26:08.367215 3194 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Nov 1 00:26:08.368439 kubelet[3194]: E1101 00:26:08.368337 3194 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-d9dc766d8-sj8dp_calico-system(fcbbf525-3d8d-4b5d-819a-2cf75639fa8a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Nov 1 00:26:08.368439 kubelet[3194]: E1101 00:26:08.368397 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-d9dc766d8-sj8dp" podUID="fcbbf525-3d8d-4b5d-819a-2cf75639fa8a"
Nov 1 00:26:09.090425 kubelet[3194]: E1101 00:26:09.090375 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-f7h6c" podUID="389b7b2a-9963-4ce4-a0c8-a7f3fe88a917"
Nov 1 00:26:09.091113 kubelet[3194]: E1101 00:26:09.091065 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-trnvf" podUID="763cf2c8-d06c-456e-8d46-4720620695a1"
Nov 1 00:26:11.173527 systemd[1]: Started sshd@24-10.200.8.40:22-10.200.16.10:41314.service - OpenSSH per-connection server daemon (10.200.16.10:41314).
Nov 1 00:26:11.808576 sshd[6269]: Accepted publickey for core from 10.200.16.10 port 41314 ssh2: RSA SHA256:4Mlk2155aZYBTfHdK8aj/hVY9PtYtx0s3kqi60O27VY
Nov 1 00:26:11.809587 sshd[6269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:26:11.819861 systemd-logind[1692]: New session 27 of user core.
Nov 1 00:26:11.823721 systemd[1]: Started session-27.scope - Session 27 of User core.
Nov 1 00:26:12.419848 sshd[6269]: pam_unix(sshd:session): session closed for user core
Nov 1 00:26:12.425020 systemd-logind[1692]: Session 27 logged out. Waiting for processes to exit.
Nov 1 00:26:12.425920 systemd[1]: sshd@24-10.200.8.40:22-10.200.16.10:41314.service: Deactivated successfully.
Nov 1 00:26:12.430314 systemd[1]: session-27.scope: Deactivated successfully.
Nov 1 00:26:12.433659 systemd-logind[1692]: Removed session 27.
Nov 1 00:26:14.091990 kubelet[3194]: E1101 00:26:14.091878 3194 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6f9c5c4598-f295m" podUID="d8da81c8-f689-4aff-8f06-3115f31a2434"