Oct 31 14:02:56.082660 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Fri Oct 31 12:16:40 -00 2025
Oct 31 14:02:56.082705 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7e4f6395c1f11b5d1e07a15155afadb91de20f1aac1cd9cff8fc1baca215a11a
Oct 31 14:02:56.082715 kernel: BIOS-provided physical RAM map:
Oct 31 14:02:56.082728 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Oct 31 14:02:56.082735 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Oct 31 14:02:56.082742 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Oct 31 14:02:56.082750 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Oct 31 14:02:56.082757 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Oct 31 14:02:56.082767 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Oct 31 14:02:56.082774 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Oct 31 14:02:56.082782 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Oct 31 14:02:56.082791 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Oct 31 14:02:56.082798 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Oct 31 14:02:56.082805 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Oct 31 14:02:56.082814 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Oct 31 14:02:56.082822 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Oct 31 14:02:56.082835 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Oct 31 14:02:56.082842 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Oct 31 14:02:56.082867 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Oct 31 14:02:56.082878 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Oct 31 14:02:56.082889 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Oct 31 14:02:56.082898 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Oct 31 14:02:56.082906 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Oct 31 14:02:56.082913 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 31 14:02:56.082921 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Oct 31 14:02:56.082930 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Oct 31 14:02:56.082948 kernel: NX (Execute Disable) protection: active
Oct 31 14:02:56.082959 kernel: APIC: Static calls initialized
Oct 31 14:02:56.082969 kernel: e820: update [mem 0x9b320018-0x9b329c57] usable ==> usable
Oct 31 14:02:56.082979 kernel: e820: update [mem 0x9b2e3018-0x9b31fe57] usable ==> usable
Oct 31 14:02:56.082989 kernel: extended physical RAM map:
Oct 31 14:02:56.082998 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Oct 31 14:02:56.083006 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Oct 31 14:02:56.083014 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Oct 31 14:02:56.083021 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Oct 31 14:02:56.083029 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Oct 31 14:02:56.083036 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Oct 31 14:02:56.083047 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Oct 31 14:02:56.083055 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2e3017] usable
Oct 31 14:02:56.083063 kernel: reserve setup_data: [mem 0x000000009b2e3018-0x000000009b31fe57] usable
Oct 31 14:02:56.083074 kernel: reserve setup_data: [mem 0x000000009b31fe58-0x000000009b320017] usable
Oct 31 14:02:56.083084 kernel: reserve setup_data: [mem 0x000000009b320018-0x000000009b329c57] usable
Oct 31 14:02:56.083092 kernel: reserve setup_data: [mem 0x000000009b329c58-0x000000009bd3efff] usable
Oct 31 14:02:56.083100 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Oct 31 14:02:56.083108 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Oct 31 14:02:56.083116 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Oct 31 14:02:56.083124 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Oct 31 14:02:56.083132 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Oct 31 14:02:56.083140 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Oct 31 14:02:56.083147 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Oct 31 14:02:56.083158 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Oct 31 14:02:56.083165 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Oct 31 14:02:56.083173 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Oct 31 14:02:56.083181 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Oct 31 14:02:56.083189 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Oct 31 14:02:56.083197 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 31 14:02:56.083207 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Oct 31 14:02:56.083217 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Oct 31 14:02:56.083230 kernel: efi: EFI v2.7 by EDK II
Oct 31 14:02:56.083238 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Oct 31 14:02:56.083246 kernel: random: crng init done
Oct 31 14:02:56.083260 kernel: efi: Remove mem151: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Oct 31 14:02:56.083281 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Oct 31 14:02:56.083294 kernel: secureboot: Secure boot disabled
Oct 31 14:02:56.083303 kernel: SMBIOS 2.8 present.
Oct 31 14:02:56.083311 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Oct 31 14:02:56.083319 kernel: DMI: Memory slots populated: 1/1
Oct 31 14:02:56.083327 kernel: Hypervisor detected: KVM
Oct 31 14:02:56.083335 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Oct 31 14:02:56.083343 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 31 14:02:56.083351 kernel: kvm-clock: using sched offset of 4818121343 cycles
Oct 31 14:02:56.083360 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 31 14:02:56.083372 kernel: tsc: Detected 2794.748 MHz processor
Oct 31 14:02:56.083381 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 31 14:02:56.083389 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 31 14:02:56.083397 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Oct 31 14:02:56.083406 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Oct 31 14:02:56.083414 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 31 14:02:56.083423 kernel: Using GB pages for direct mapping
Oct 31 14:02:56.083433 kernel: ACPI: Early table checksum verification disabled
Oct 31 14:02:56.083442 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Oct 31 14:02:56.083451 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Oct 31 14:02:56.083459 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 14:02:56.083468 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 14:02:56.083476 kernel: ACPI: FACS 0x000000009CBDD000 000040
Oct 31 14:02:56.083484 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 14:02:56.083495 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 14:02:56.083503 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 14:02:56.083512 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 31 14:02:56.083520 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Oct 31 14:02:56.083529 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Oct 31 14:02:56.083537 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Oct 31 14:02:56.083545 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Oct 31 14:02:56.083556 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Oct 31 14:02:56.083564 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Oct 31 14:02:56.083572 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Oct 31 14:02:56.083581 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Oct 31 14:02:56.083589 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Oct 31 14:02:56.083597 kernel: No NUMA configuration found
Oct 31 14:02:56.083605 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Oct 31 14:02:56.083613 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Oct 31 14:02:56.083624 kernel: Zone ranges:
Oct 31 14:02:56.083633 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Oct 31 14:02:56.083641 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Oct 31 14:02:56.083649 kernel: Normal empty
Oct 31 14:02:56.083658 kernel: Device empty
Oct 31 14:02:56.083666 kernel: Movable zone start for each node
Oct 31 14:02:56.083674 kernel: Early memory node ranges
Oct 31 14:02:56.083684 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Oct 31 14:02:56.083695 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Oct 31 14:02:56.083704 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Oct 31 14:02:56.083712 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Oct 31 14:02:56.083720 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Oct 31 14:02:56.083728 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Oct 31 14:02:56.083736 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Oct 31 14:02:56.083745 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Oct 31 14:02:56.083757 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Oct 31 14:02:56.083766 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 31 14:02:56.083781 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Oct 31 14:02:56.083792 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Oct 31 14:02:56.083800 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 31 14:02:56.083808 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Oct 31 14:02:56.083817 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Oct 31 14:02:56.083826 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Oct 31 14:02:56.083834 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Oct 31 14:02:56.083845 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Oct 31 14:02:56.083873 kernel: ACPI: PM-Timer IO Port: 0x608
Oct 31 14:02:56.083884 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 31 14:02:56.083893 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 31 14:02:56.083905 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 31 14:02:56.083914 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 31 14:02:56.083922 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 31 14:02:56.083931 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 31 14:02:56.083940 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 31 14:02:56.083948 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 31 14:02:56.083957 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Oct 31 14:02:56.083968 kernel: TSC deadline timer available
Oct 31 14:02:56.083976 kernel: CPU topo: Max. logical packages: 1
Oct 31 14:02:56.083985 kernel: CPU topo: Max. logical dies: 1
Oct 31 14:02:56.083993 kernel: CPU topo: Max. dies per package: 1
Oct 31 14:02:56.084001 kernel: CPU topo: Max. threads per core: 1
Oct 31 14:02:56.084010 kernel: CPU topo: Num. cores per package: 4
Oct 31 14:02:56.084018 kernel: CPU topo: Num. threads per package: 4
Oct 31 14:02:56.084027 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Oct 31 14:02:56.084038 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 31 14:02:56.084046 kernel: kvm-guest: KVM setup pv remote TLB flush
Oct 31 14:02:56.084054 kernel: kvm-guest: setup PV sched yield
Oct 31 14:02:56.084063 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Oct 31 14:02:56.084072 kernel: Booting paravirtualized kernel on KVM
Oct 31 14:02:56.084081 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 31 14:02:56.084090 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Oct 31 14:02:56.084098 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Oct 31 14:02:56.084109 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Oct 31 14:02:56.084118 kernel: pcpu-alloc: [0] 0 1 2 3
Oct 31 14:02:56.084126 kernel: kvm-guest: PV spinlocks enabled
Oct 31 14:02:56.084135 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Oct 31 14:02:56.084148 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7e4f6395c1f11b5d1e07a15155afadb91de20f1aac1cd9cff8fc1baca215a11a
Oct 31 14:02:56.084158 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 31 14:02:56.084169 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 31 14:02:56.084178 kernel: Fallback order for Node 0: 0
Oct 31 14:02:56.084186 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Oct 31 14:02:56.084195 kernel: Policy zone: DMA32
Oct 31 14:02:56.084204 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 31 14:02:56.084212 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct 31 14:02:56.084221 kernel: ftrace: allocating 40092 entries in 157 pages
Oct 31 14:02:56.084232 kernel: ftrace: allocated 157 pages with 5 groups
Oct 31 14:02:56.084241 kernel: Dynamic Preempt: voluntary
Oct 31 14:02:56.084249 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 31 14:02:56.084259 kernel: rcu: RCU event tracing is enabled.
Oct 31 14:02:56.084276 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Oct 31 14:02:56.084285 kernel: Trampoline variant of Tasks RCU enabled.
Oct 31 14:02:56.084293 kernel: Rude variant of Tasks RCU enabled.
Oct 31 14:02:56.084302 kernel: Tracing variant of Tasks RCU enabled.
Oct 31 14:02:56.084313 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 31 14:02:56.084321 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct 31 14:02:56.084332 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 31 14:02:56.084341 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 31 14:02:56.084350 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 31 14:02:56.084359 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Oct 31 14:02:56.084367 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 31 14:02:56.084378 kernel: Console: colour dummy device 80x25
Oct 31 14:02:56.084387 kernel: printk: legacy console [ttyS0] enabled
Oct 31 14:02:56.084395 kernel: ACPI: Core revision 20240827
Oct 31 14:02:56.084404 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Oct 31 14:02:56.084413 kernel: APIC: Switch to symmetric I/O mode setup
Oct 31 14:02:56.084421 kernel: x2apic enabled
Oct 31 14:02:56.084430 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 31 14:02:56.084440 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Oct 31 14:02:56.084449 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Oct 31 14:02:56.084458 kernel: kvm-guest: setup PV IPIs
Oct 31 14:02:56.084466 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Oct 31 14:02:56.084475 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Oct 31 14:02:56.084484 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Oct 31 14:02:56.084493 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 31 14:02:56.084504 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 31 14:02:56.084512 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 31 14:02:56.084521 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 31 14:02:56.084530 kernel: Spectre V2 : Mitigation: Retpolines
Oct 31 14:02:56.084538 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct 31 14:02:56.084547 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 31 14:02:56.084563 kernel: active return thunk: retbleed_return_thunk
Oct 31 14:02:56.084582 kernel: RETBleed: Mitigation: untrained return thunk
Oct 31 14:02:56.084600 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 31 14:02:56.084612 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 31 14:02:56.084622 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct 31 14:02:56.084631 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct 31 14:02:56.084640 kernel: active return thunk: srso_return_thunk
Oct 31 14:02:56.084649 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct 31 14:02:56.084661 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 31 14:02:56.084669 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 31 14:02:56.084678 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 31 14:02:56.084686 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 31 14:02:56.084695 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 31 14:02:56.084704 kernel: Freeing SMP alternatives memory: 32K
Oct 31 14:02:56.084712 kernel: pid_max: default: 32768 minimum: 301
Oct 31 14:02:56.084723 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Oct 31 14:02:56.084731 kernel: landlock: Up and running.
Oct 31 14:02:56.084740 kernel: SELinux: Initializing.
Oct 31 14:02:56.084748 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 31 14:02:56.084757 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 31 14:02:56.084766 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 31 14:02:56.084774 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 31 14:02:56.084785 kernel: ... version: 0
Oct 31 14:02:56.084793 kernel: ... bit width: 48
Oct 31 14:02:56.084802 kernel: ... generic registers: 6
Oct 31 14:02:56.084810 kernel: ... value mask: 0000ffffffffffff
Oct 31 14:02:56.084818 kernel: ... max period: 00007fffffffffff
Oct 31 14:02:56.084827 kernel: ... fixed-purpose events: 0
Oct 31 14:02:56.084835 kernel: ... event mask: 000000000000003f
Oct 31 14:02:56.084846 kernel: signal: max sigframe size: 1776
Oct 31 14:02:56.084882 kernel: rcu: Hierarchical SRCU implementation.
Oct 31 14:02:56.084893 kernel: rcu: Max phase no-delay instances is 400.
Oct 31 14:02:56.084906 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Oct 31 14:02:56.084914 kernel: smp: Bringing up secondary CPUs ...
Oct 31 14:02:56.084923 kernel: smpboot: x86: Booting SMP configuration:
Oct 31 14:02:56.084931 kernel: .... node #0, CPUs: #1 #2 #3
Oct 31 14:02:56.084939 kernel: smp: Brought up 1 node, 4 CPUs
Oct 31 14:02:56.084952 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Oct 31 14:02:56.084962 kernel: Memory: 2441104K/2565800K available (14336K kernel code, 2443K rwdata, 29892K rodata, 15348K init, 2696K bss, 118764K reserved, 0K cma-reserved)
Oct 31 14:02:56.084970 kernel: devtmpfs: initialized
Oct 31 14:02:56.084979 kernel: x86/mm: Memory block size: 128MB
Oct 31 14:02:56.084988 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Oct 31 14:02:56.084996 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Oct 31 14:02:56.085005 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Oct 31 14:02:56.085016 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Oct 31 14:02:56.085025 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Oct 31 14:02:56.085034 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Oct 31 14:02:56.085042 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 31 14:02:56.085051 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct 31 14:02:56.085060 kernel: pinctrl core: initialized pinctrl subsystem
Oct 31 14:02:56.085070 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 31 14:02:56.085079 kernel: audit: initializing netlink subsys (disabled)
Oct 31 14:02:56.085087 kernel: audit: type=2000 audit(1761919371.275:1): state=initialized audit_enabled=0 res=1
Oct 31 14:02:56.085096 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 31 14:02:56.085104 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 31 14:02:56.085113 kernel: cpuidle: using governor menu
Oct 31 14:02:56.085121 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 31 14:02:56.085130 kernel: dca service started, version 1.12.1
Oct 31 14:02:56.085140 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Oct 31 14:02:56.085149 kernel: PCI: Using configuration type 1 for base access
Oct 31 14:02:56.085158 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 31 14:02:56.085167 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 31 14:02:56.085175 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 31 14:02:56.085184 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 31 14:02:56.085192 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 31 14:02:56.085203 kernel: ACPI: Added _OSI(Module Device)
Oct 31 14:02:56.085211 kernel: ACPI: Added _OSI(Processor Device)
Oct 31 14:02:56.085220 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 31 14:02:56.085228 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 31 14:02:56.085237 kernel: ACPI: Interpreter enabled
Oct 31 14:02:56.085246 kernel: ACPI: PM: (supports S0 S3 S5)
Oct 31 14:02:56.085254 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 31 14:02:56.085275 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 31 14:02:56.085284 kernel: PCI: Using E820 reservations for host bridge windows
Oct 31 14:02:56.085292 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Oct 31 14:02:56.085301 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 31 14:02:56.085691 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 31 14:02:56.085912 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Oct 31 14:02:56.086135 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Oct 31 14:02:56.086151 kernel: PCI host bridge to bus 0000:00
Oct 31 14:02:56.086381 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Oct 31 14:02:56.086582 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Oct 31 14:02:56.086782 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 31 14:02:56.086998 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Oct 31 14:02:56.087206 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Oct 31 14:02:56.087430 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Oct 31 14:02:56.087611 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 31 14:02:56.087837 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Oct 31 14:02:56.088106 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Oct 31 14:02:56.088299 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Oct 31 14:02:56.088479 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Oct 31 14:02:56.088651 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Oct 31 14:02:56.088822 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 31 14:02:56.089057 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct 31 14:02:56.089313 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Oct 31 14:02:56.089565 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Oct 31 14:02:56.089785 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Oct 31 14:02:56.090004 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Oct 31 14:02:56.090181 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Oct 31 14:02:56.090366 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Oct 31 14:02:56.090540 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Oct 31 14:02:56.090730 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 31 14:02:56.090973 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Oct 31 14:02:56.091189 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Oct 31 14:02:56.091381 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Oct 31 14:02:56.091555 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Oct 31 14:02:56.091740 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Oct 31 14:02:56.091940 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Oct 31 14:02:56.092126 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Oct 31 14:02:56.092313 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Oct 31 14:02:56.092486 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Oct 31 14:02:56.092667 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Oct 31 14:02:56.092846 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Oct 31 14:02:56.092880 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 31 14:02:56.092892 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 31 14:02:56.092904 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 31 14:02:56.092915 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 31 14:02:56.092923 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Oct 31 14:02:56.092936 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Oct 31 14:02:56.092945 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Oct 31 14:02:56.092953 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Oct 31 14:02:56.092962 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Oct 31 14:02:56.092971 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Oct 31 14:02:56.092979 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Oct 31 14:02:56.092988 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Oct 31 14:02:56.092998 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Oct 31 14:02:56.093007 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Oct 31 14:02:56.093015 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Oct 31 14:02:56.093024 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Oct 31 14:02:56.093032 kernel: iommu: Default domain type: Translated
Oct 31 14:02:56.093041 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 31 14:02:56.093049 kernel: efivars: Registered efivars operations
Oct 31 14:02:56.093060 kernel: PCI: Using ACPI for IRQ routing
Oct 31 14:02:56.093069 kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 31 14:02:56.093077 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Oct 31 14:02:56.093086 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Oct 31 14:02:56.093094 kernel: e820: reserve RAM buffer [mem 0x9b2e3018-0x9bffffff]
Oct 31 14:02:56.093102 kernel: e820: reserve RAM buffer [mem 0x9b320018-0x9bffffff]
Oct 31 14:02:56.093111 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Oct 31 14:02:56.093119 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Oct 31 14:02:56.093130 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Oct 31 14:02:56.093138 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Oct 31 14:02:56.093355 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Oct 31 14:02:56.093537 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Oct 31 14:02:56.093708 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 31 14:02:56.093719 kernel: vgaarb: loaded
Oct 31 14:02:56.093732 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Oct 31 14:02:56.093741 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Oct 31 14:02:56.093749 kernel: clocksource: Switched to clocksource kvm-clock
Oct 31 14:02:56.093758 kernel: VFS: Disk quotas dquot_6.6.0
Oct 31 14:02:56.093766 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 31 14:02:56.093775 kernel: pnp: PnP ACPI init
Oct 31 14:02:56.094035 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Oct 31 14:02:56.094064 kernel: pnp: PnP ACPI: found 6 devices
Oct 31 14:02:56.094076 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 31 14:02:56.094088 kernel: NET: Registered PF_INET protocol family
Oct 31 14:02:56.094100 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 31 14:02:56.094112 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 31 14:02:56.094124 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 31 14:02:56.094165 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 31 14:02:56.094176 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 31 14:02:56.094188 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 31 14:02:56.094200 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 31 14:02:56.094211 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 31 14:02:56.094223 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 31 14:02:56.094235 kernel: NET: Registered PF_XDP protocol family
Oct 31 14:02:56.094480 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Oct 31 14:02:56.095060 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Oct 31 14:02:56.095293 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Oct 31 14:02:56.095495 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Oct 31 14:02:56.095758 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 31 14:02:56.095978 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Oct 31 14:02:56.096144 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Oct 31 14:02:56.096331 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Oct 31 14:02:56.096347 kernel: PCI: CLS 0 bytes, default 64
Oct 31 14:02:56.096360 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Oct 31 14:02:56.096377 kernel: Initialise system trusted keyrings
Oct 31 14:02:56.096392 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 31 14:02:56.096403 kernel: Key type asymmetric registered
Oct 31 14:02:56.096415 kernel: Asymmetric key parser 'x509' registered
Oct 31 14:02:56.096428 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Oct 31 14:02:56.096440 kernel: io scheduler mq-deadline registered
Oct 31 14:02:56.096453 kernel: io scheduler kyber registered
Oct 31 14:02:56.096465 kernel: io scheduler bfq registered
Oct 31 14:02:56.096482 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Oct 31 14:02:56.096495 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Oct 31 14:02:56.096508 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Oct 31 14:02:56.096519 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Oct 31 14:02:56.096531 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 31 14:02:56.096544 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 31 14:02:56.096555 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 31 14:02:56.096570 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 31 14:02:56.096582 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 31 14:02:56.096817 kernel: rtc_cmos 00:04: RTC can wake from S4
Oct 31 14:02:56.096838 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Oct 31 14:02:56.097074 kernel: rtc_cmos 00:04: registered as rtc0
Oct 31 14:02:56.097294 kernel: rtc_cmos 00:04: setting system clock to 2025-10-31T14:02:53 UTC (1761919373)
Oct 31 14:02:56.097516 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Oct 31 14:02:56.097533 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct 31 14:02:56.097546 kernel: efifb: probing for efifb
Oct 31 14:02:56.097558 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Oct 31 14:02:56.097570 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Oct 31 14:02:56.097582 kernel: efifb: scrolling: redraw
Oct 31 14:02:56.097594 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Oct 31 14:02:56.097611 kernel: Console: switching to colour frame buffer device 160x50
Oct 31 14:02:56.097623 kernel: fb0: EFI VGA frame buffer device
Oct 31 14:02:56.097634 kernel: pstore: Using crash dump compression: deflate
Oct 31 14:02:56.097646 kernel: pstore: Registered efi_pstore as persistent store backend
Oct 31 14:02:56.097658 kernel: NET: Registered PF_INET6 protocol family
Oct 31 14:02:56.097670 kernel: Segment Routing with IPv6
Oct 31 14:02:56.097681 kernel: In-situ OAM (IOAM) with IPv6
Oct 31 14:02:56.097693 kernel: NET: Registered PF_PACKET protocol family
Oct 31 14:02:56.097707 kernel: Key type dns_resolver registered
Oct 31 14:02:56.097719 kernel: IPI shorthand broadcast: enabled
Oct 31 14:02:56.097731 kernel: sched_clock: Marking stable (2080003571, 291676491)->(2510499583, -138819521)
Oct 31 14:02:56.097743 kernel: registered taskstats version 1
Oct 31 14:02:56.097754 kernel: Loading compiled-in X.509 certificates
Oct 31 14:02:56.097766 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: d5b1c22885a28a952e9fe2b5fe942003d6c5c8b4'
Oct 31 14:02:56.097778 kernel: Demotion targets for Node 0: null
Oct 31 14:02:56.097792 kernel: Key type .fscrypt registered Oct 31
14:02:56.097804 kernel: Key type fscrypt-provisioning registered Oct 31 14:02:56.097816 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 31 14:02:56.097827 kernel: ima: Allocated hash algorithm: sha1 Oct 31 14:02:56.097839 kernel: ima: No architecture policies found Oct 31 14:02:56.097868 kernel: clk: Disabling unused clocks Oct 31 14:02:56.097880 kernel: Freeing unused kernel image (initmem) memory: 15348K Oct 31 14:02:56.097895 kernel: Write protecting the kernel read-only data: 45056k Oct 31 14:02:56.097907 kernel: Freeing unused kernel image (rodata/data gap) memory: 828K Oct 31 14:02:56.097918 kernel: Run /init as init process Oct 31 14:02:56.097930 kernel: with arguments: Oct 31 14:02:56.097941 kernel: /init Oct 31 14:02:56.097953 kernel: with environment: Oct 31 14:02:56.097964 kernel: HOME=/ Oct 31 14:02:56.097976 kernel: TERM=linux Oct 31 14:02:56.097990 kernel: SCSI subsystem initialized Oct 31 14:02:56.098003 kernel: libata version 3.00 loaded. Oct 31 14:02:56.098228 kernel: ahci 0000:00:1f.2: version 3.0 Oct 31 14:02:56.098248 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Oct 31 14:02:56.098490 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Oct 31 14:02:56.098723 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Oct 31 14:02:56.098983 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Oct 31 14:02:56.099260 kernel: scsi host0: ahci Oct 31 14:02:56.099524 kernel: scsi host1: ahci Oct 31 14:02:56.099768 kernel: scsi host2: ahci Oct 31 14:02:56.100037 kernel: scsi host3: ahci Oct 31 14:02:56.100302 kernel: scsi host4: ahci Oct 31 14:02:56.100545 kernel: scsi host5: ahci Oct 31 14:02:56.100564 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1 Oct 31 14:02:56.100577 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1 Oct 31 14:02:56.100590 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1 
Oct 31 14:02:56.100607 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1 Oct 31 14:02:56.100622 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1 Oct 31 14:02:56.100634 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1 Oct 31 14:02:56.100646 kernel: ata2: SATA link down (SStatus 0 SControl 300) Oct 31 14:02:56.100659 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Oct 31 14:02:56.100671 kernel: ata4: SATA link down (SStatus 0 SControl 300) Oct 31 14:02:56.100683 kernel: ata1: SATA link down (SStatus 0 SControl 300) Oct 31 14:02:56.100696 kernel: ata3.00: LPM support broken, forcing max_power Oct 31 14:02:56.100713 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Oct 31 14:02:56.100725 kernel: ata3.00: applying bridge limits Oct 31 14:02:56.100738 kernel: ata3.00: LPM support broken, forcing max_power Oct 31 14:02:56.100749 kernel: ata3.00: configured for UDMA/100 Oct 31 14:02:56.101038 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Oct 31 14:02:56.101058 kernel: ata6: SATA link down (SStatus 0 SControl 300) Oct 31 14:02:56.101071 kernel: ata5: SATA link down (SStatus 0 SControl 300) Oct 31 14:02:56.101476 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Oct 31 14:02:56.101699 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Oct 31 14:02:56.101717 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 31 14:02:56.101730 kernel: GPT:16515071 != 27000831 Oct 31 14:02:56.101742 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 31 14:02:56.101754 kernel: GPT:16515071 != 27000831 Oct 31 14:02:56.101765 kernel: GPT: Use GNU Parted to correct GPT errors. 
Oct 31 14:02:56.101782 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 31 14:02:56.102047 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Oct 31 14:02:56.102067 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Oct 31 14:02:56.102325 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Oct 31 14:02:56.102345 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 31 14:02:56.102358 kernel: device-mapper: uevent: version 1.0.3 Oct 31 14:02:56.102375 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Oct 31 14:02:56.102388 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Oct 31 14:02:56.102399 kernel: raid6: avx2x4 gen() 20235 MB/s Oct 31 14:02:56.102410 kernel: raid6: avx2x2 gen() 30307 MB/s Oct 31 14:02:56.102422 kernel: raid6: avx2x1 gen() 25604 MB/s Oct 31 14:02:56.102434 kernel: raid6: using algorithm avx2x2 gen() 30307 MB/s Oct 31 14:02:56.102443 kernel: raid6: .... 
xor() 19916 MB/s, rmw enabled Oct 31 14:02:56.102455 kernel: raid6: using avx2x2 recovery algorithm Oct 31 14:02:56.102464 kernel: xor: automatically using best checksumming function avx Oct 31 14:02:56.102473 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 31 14:02:56.102482 kernel: BTRFS: device fsid 5e8ba8f1-db13-4075-a8cb-1b945120d0ee devid 1 transid 35 /dev/mapper/usr (253:0) scanned by mount (181) Oct 31 14:02:56.102492 kernel: BTRFS info (device dm-0): first mount of filesystem 5e8ba8f1-db13-4075-a8cb-1b945120d0ee Oct 31 14:02:56.102501 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Oct 31 14:02:56.102510 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 31 14:02:56.102521 kernel: BTRFS info (device dm-0): enabling free space tree Oct 31 14:02:56.102530 kernel: loop: module loaded Oct 31 14:02:56.102539 kernel: loop0: detected capacity change from 0 to 100128 Oct 31 14:02:56.102548 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 31 14:02:56.102562 systemd[1]: Successfully made /usr/ read-only. Oct 31 14:02:56.102578 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 31 14:02:56.102595 systemd[1]: Detected virtualization kvm. Oct 31 14:02:56.102608 systemd[1]: Detected architecture x86-64. Oct 31 14:02:56.102621 systemd[1]: Running in initrd. Oct 31 14:02:56.102634 systemd[1]: No hostname configured, using default hostname. Oct 31 14:02:56.102648 systemd[1]: Hostname set to . Oct 31 14:02:56.102661 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Oct 31 14:02:56.102674 systemd[1]: Queued start job for default target initrd.target. 
Oct 31 14:02:56.102690 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Oct 31 14:02:56.102703 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 31 14:02:56.102716 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 31 14:02:56.102731 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 31 14:02:56.102744 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 31 14:02:56.102758 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 31 14:02:56.102775 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 31 14:02:56.102789 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 31 14:02:56.102802 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 31 14:02:56.102816 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Oct 31 14:02:56.102829 systemd[1]: Reached target paths.target - Path Units. Oct 31 14:02:56.102843 systemd[1]: Reached target slices.target - Slice Units. Oct 31 14:02:56.102882 systemd[1]: Reached target swap.target - Swaps. Oct 31 14:02:56.102896 systemd[1]: Reached target timers.target - Timer Units. Oct 31 14:02:56.102909 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 31 14:02:56.102923 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 31 14:02:56.102936 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 31 14:02:56.102950 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Oct 31 14:02:56.102963 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Oct 31 14:02:56.102979 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 31 14:02:56.102993 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 31 14:02:56.103006 systemd[1]: Reached target sockets.target - Socket Units. Oct 31 14:02:56.103020 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 31 14:02:56.103033 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 31 14:02:56.103047 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 31 14:02:56.103060 systemd[1]: Finished network-cleanup.service - Network Cleanup. Oct 31 14:02:56.103077 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Oct 31 14:02:56.103091 systemd[1]: Starting systemd-fsck-usr.service... Oct 31 14:02:56.103104 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 31 14:02:56.103118 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 31 14:02:56.103131 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 31 14:02:56.103180 systemd-journald[314]: Collecting audit messages is disabled. Oct 31 14:02:56.103213 kernel: clocksource: Long readout interval, skipping watchdog check: cs_nsec: 1008165570 wd_nsec: 1008165170 Oct 31 14:02:56.103226 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 31 14:02:56.103240 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 31 14:02:56.103254 systemd[1]: Finished systemd-fsck-usr.service. Oct 31 14:02:56.103277 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Oct 31 14:02:56.103290 systemd-journald[314]: Journal started Oct 31 14:02:56.103319 systemd-journald[314]: Runtime Journal (/run/log/journal/80e954db8fd04a0f86c0cf4f4873f900) is 6M, max 48.1M, 42M free. Oct 31 14:02:56.106185 systemd[1]: Started systemd-journald.service - Journal Service. Oct 31 14:02:56.111605 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 31 14:02:56.132500 systemd-tmpfiles[331]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Oct 31 14:02:56.135975 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 31 14:02:56.141081 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 31 14:02:56.148544 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 31 14:02:56.155933 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 31 14:02:56.159666 systemd-modules-load[317]: Inserted module 'br_netfilter' Oct 31 14:02:56.161256 kernel: Bridge firewalling registered Oct 31 14:02:56.161650 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 31 14:02:56.162660 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 31 14:02:56.168315 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 31 14:02:56.173751 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 31 14:02:56.188467 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 31 14:02:56.194934 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 31 14:02:56.198413 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Oct 31 14:02:56.213048 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 31 14:02:56.218993 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 31 14:02:56.265149 dracut-cmdline[363]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7e4f6395c1f11b5d1e07a15155afadb91de20f1aac1cd9cff8fc1baca215a11a Oct 31 14:02:56.280408 systemd-resolved[354]: Positive Trust Anchors: Oct 31 14:02:56.280437 systemd-resolved[354]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 31 14:02:56.280441 systemd-resolved[354]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 31 14:02:56.280472 systemd-resolved[354]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 31 14:02:56.323755 systemd-resolved[354]: Defaulting to hostname 'linux'. Oct 31 14:02:56.326690 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 31 14:02:56.329039 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 31 14:02:56.752900 kernel: Loading iSCSI transport class v2.0-870. 
Oct 31 14:02:56.767883 kernel: iscsi: registered transport (tcp) Oct 31 14:02:56.794289 kernel: iscsi: registered transport (qla4xxx) Oct 31 14:02:56.794338 kernel: QLogic iSCSI HBA Driver Oct 31 14:02:56.823325 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 31 14:02:56.847635 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 31 14:02:56.854435 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 31 14:02:56.916203 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 31 14:02:56.919804 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 31 14:02:56.922830 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 31 14:02:56.971196 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 31 14:02:56.977193 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 31 14:02:57.028878 systemd-udevd[600]: Using default interface naming scheme 'v257'. Oct 31 14:02:57.042309 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 31 14:02:57.044952 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Oct 31 14:02:57.090322 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 31 14:02:57.093100 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 31 14:02:57.098432 dracut-pre-trigger[678]: rd.md=0: removing MD RAID activation Oct 31 14:02:57.533979 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 31 14:02:57.536584 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Oct 31 14:02:57.555767 systemd-networkd[712]: lo: Link UP Oct 31 14:02:57.555776 systemd-networkd[712]: lo: Gained carrier Oct 31 14:02:57.558495 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 31 14:02:57.560374 systemd[1]: Reached target network.target - Network. Oct 31 14:02:57.636010 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 31 14:02:57.639334 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 31 14:02:57.691252 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 31 14:02:57.715813 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 31 14:02:57.736903 kernel: cryptd: max_cpu_qlen set to 1000 Oct 31 14:02:57.737362 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 31 14:02:57.745879 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Oct 31 14:02:57.754459 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Oct 31 14:02:57.782016 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Oct 31 14:02:57.790658 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 31 14:02:57.792417 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 31 14:02:57.796764 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 31 14:02:57.806871 kernel: AES CTR mode by8 optimization enabled Oct 31 14:02:57.819231 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 31 14:02:57.832164 disk-uuid[828]: Primary Header is updated. Oct 31 14:02:57.832164 disk-uuid[828]: Secondary Entries is updated. Oct 31 14:02:57.832164 disk-uuid[828]: Secondary Header is updated. 
Oct 31 14:02:57.836158 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 31 14:02:57.836328 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 31 14:02:57.839314 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 31 14:02:57.847091 systemd-networkd[712]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 31 14:02:57.847761 systemd-networkd[712]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 31 14:02:57.849385 systemd-networkd[712]: eth0: Link UP Oct 31 14:02:57.849693 systemd-networkd[712]: eth0: Gained carrier Oct 31 14:02:57.849705 systemd-networkd[712]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 31 14:02:57.872910 systemd-networkd[712]: eth0: DHCPv4 address 10.0.0.39/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 31 14:02:57.882601 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 31 14:02:57.941297 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 31 14:02:57.943363 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 31 14:02:57.945416 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 31 14:02:57.949491 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 31 14:02:57.954617 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 31 14:02:57.992459 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 31 14:02:58.892974 disk-uuid[840]: Warning: The kernel is still using the old partition table. 
Oct 31 14:02:58.892974 disk-uuid[840]: The new table will be used at the next reboot or after you Oct 31 14:02:58.892974 disk-uuid[840]: run partprobe(8) or kpartx(8) Oct 31 14:02:58.892974 disk-uuid[840]: The operation has completed successfully. Oct 31 14:02:58.907557 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 31 14:02:58.907732 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 31 14:02:58.913332 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 31 14:02:58.953078 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (870) Oct 31 14:02:58.953142 kernel: BTRFS info (device vda6): first mount of filesystem dd2b9397-9351-49e9-bd32-bf3668fba946 Oct 31 14:02:58.953172 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 31 14:02:58.958060 kernel: BTRFS info (device vda6): turning on async discard Oct 31 14:02:58.958089 kernel: BTRFS info (device vda6): enabling free space tree Oct 31 14:02:58.965876 kernel: BTRFS info (device vda6): last unmount of filesystem dd2b9397-9351-49e9-bd32-bf3668fba946 Oct 31 14:02:58.966424 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 31 14:02:58.970668 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Oct 31 14:02:59.340219 systemd-networkd[712]: eth0: Gained IPv6LL Oct 31 14:02:59.344047 ignition[889]: Ignition 2.22.0 Oct 31 14:02:59.344065 ignition[889]: Stage: fetch-offline Oct 31 14:02:59.344137 ignition[889]: no configs at "/usr/lib/ignition/base.d" Oct 31 14:02:59.344155 ignition[889]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 31 14:02:59.345324 ignition[889]: parsed url from cmdline: "" Oct 31 14:02:59.345330 ignition[889]: no config URL provided Oct 31 14:02:59.345337 ignition[889]: reading system config file "/usr/lib/ignition/user.ign" Oct 31 14:02:59.345354 ignition[889]: no config at "/usr/lib/ignition/user.ign" Oct 31 14:02:59.346516 ignition[889]: op(1): [started] loading QEMU firmware config module Oct 31 14:02:59.346524 ignition[889]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 31 14:02:59.363874 ignition[889]: op(1): [finished] loading QEMU firmware config module Oct 31 14:02:59.442455 ignition[889]: parsing config with SHA512: 964482d700f76f2928e7d2102ddf452e6146f34aaac0991962f65c7af1427696052c1e534e6448c7f20bc3a63a08639414442da01d8ec46da21bc73a261248d4 Oct 31 14:02:59.450062 unknown[889]: fetched base config from "system" Oct 31 14:02:59.450076 unknown[889]: fetched user config from "qemu" Oct 31 14:02:59.450571 ignition[889]: fetch-offline: fetch-offline passed Oct 31 14:02:59.453796 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 31 14:02:59.450668 ignition[889]: Ignition finished successfully Oct 31 14:02:59.455471 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 31 14:02:59.456452 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Oct 31 14:02:59.488842 ignition[899]: Ignition 2.22.0 Oct 31 14:02:59.488870 ignition[899]: Stage: kargs Oct 31 14:02:59.489005 ignition[899]: no configs at "/usr/lib/ignition/base.d" Oct 31 14:02:59.489015 ignition[899]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 31 14:02:59.489835 ignition[899]: kargs: kargs passed Oct 31 14:02:59.489897 ignition[899]: Ignition finished successfully Oct 31 14:02:59.496305 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 31 14:02:59.498340 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Oct 31 14:02:59.534400 ignition[907]: Ignition 2.22.0 Oct 31 14:02:59.534414 ignition[907]: Stage: disks Oct 31 14:02:59.534604 ignition[907]: no configs at "/usr/lib/ignition/base.d" Oct 31 14:02:59.534616 ignition[907]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 31 14:02:59.535884 ignition[907]: disks: disks passed Oct 31 14:02:59.535940 ignition[907]: Ignition finished successfully Oct 31 14:02:59.541294 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 31 14:02:59.543542 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 31 14:02:59.546501 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 31 14:02:59.549749 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 31 14:02:59.553844 systemd[1]: Reached target sysinit.target - System Initialization. Oct 31 14:02:59.557338 systemd[1]: Reached target basic.target - Basic System. Oct 31 14:02:59.561396 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 31 14:02:59.729661 systemd-fsck[917]: ROOT: clean, 15/456736 files, 38230/456704 blocks Oct 31 14:02:59.794975 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 31 14:02:59.797658 systemd[1]: Mounting sysroot.mount - /sysroot... 
Oct 31 14:02:59.923879 kernel: EXT4-fs (vda9): mounted filesystem cbeebc11-9f40-4f51-91db-fa53497e9ba3 r/w with ordered data mode. Quota mode: none. Oct 31 14:02:59.924094 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 31 14:02:59.926209 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 31 14:02:59.930261 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 31 14:02:59.932930 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 31 14:02:59.934937 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 31 14:02:59.934973 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 31 14:02:59.935000 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 31 14:02:59.950532 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 31 14:02:59.952879 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 31 14:02:59.958124 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (925) Oct 31 14:02:59.958148 kernel: BTRFS info (device vda6): first mount of filesystem dd2b9397-9351-49e9-bd32-bf3668fba946 Oct 31 14:02:59.960876 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Oct 31 14:02:59.962062 kernel: BTRFS info (device vda6): turning on async discard Oct 31 14:02:59.964317 kernel: BTRFS info (device vda6): enabling free space tree Oct 31 14:02:59.965808 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Oct 31 14:03:00.015044 initrd-setup-root[949]: cut: /sysroot/etc/passwd: No such file or directory Oct 31 14:03:00.020450 initrd-setup-root[956]: cut: /sysroot/etc/group: No such file or directory Oct 31 14:03:00.026160 initrd-setup-root[963]: cut: /sysroot/etc/shadow: No such file or directory Oct 31 14:03:00.030818 initrd-setup-root[970]: cut: /sysroot/etc/gshadow: No such file or directory Oct 31 14:03:00.155349 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Oct 31 14:03:00.157816 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 31 14:03:00.161988 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 31 14:03:00.181681 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 31 14:03:00.184056 kernel: BTRFS info (device vda6): last unmount of filesystem dd2b9397-9351-49e9-bd32-bf3668fba946 Oct 31 14:03:00.195962 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 31 14:03:00.233671 ignition[1038]: INFO : Ignition 2.22.0 Oct 31 14:03:00.233671 ignition[1038]: INFO : Stage: mount Oct 31 14:03:00.236577 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 31 14:03:00.236577 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 31 14:03:00.236577 ignition[1038]: INFO : mount: mount passed Oct 31 14:03:00.236577 ignition[1038]: INFO : Ignition finished successfully Oct 31 14:03:00.238934 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 31 14:03:00.242959 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 31 14:03:00.273653 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Oct 31 14:03:00.311874 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1051)
Oct 31 14:03:00.315625 kernel: BTRFS info (device vda6): first mount of filesystem dd2b9397-9351-49e9-bd32-bf3668fba946
Oct 31 14:03:00.315649 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 31 14:03:00.319723 kernel: BTRFS info (device vda6): turning on async discard
Oct 31 14:03:00.319745 kernel: BTRFS info (device vda6): enabling free space tree
Oct 31 14:03:00.322452 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 31 14:03:00.368427 ignition[1068]: INFO : Ignition 2.22.0
Oct 31 14:03:00.368427 ignition[1068]: INFO : Stage: files
Oct 31 14:03:00.371042 ignition[1068]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 31 14:03:00.371042 ignition[1068]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 31 14:03:00.371042 ignition[1068]: DEBUG : files: compiled without relabeling support, skipping
Oct 31 14:03:00.376996 ignition[1068]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 31 14:03:00.376996 ignition[1068]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 31 14:03:00.384414 ignition[1068]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 31 14:03:00.386846 ignition[1068]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 31 14:03:00.389424 unknown[1068]: wrote ssh authorized keys file for user: core
Oct 31 14:03:00.391230 ignition[1068]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 31 14:03:00.391230 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Oct 31 14:03:00.391230 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Oct 31 14:03:00.438867 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 31 14:03:00.520209 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Oct 31 14:03:00.520209 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 31 14:03:00.526647 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 31 14:03:00.526647 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 31 14:03:00.526647 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 31 14:03:00.526647 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 31 14:03:00.526647 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 31 14:03:00.526647 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 31 14:03:00.526647 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 31 14:03:00.526647 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 31 14:03:00.526647 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 31 14:03:00.526647 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Oct 31 14:03:00.555490 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Oct 31 14:03:00.555490 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Oct 31 14:03:00.555490 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Oct 31 14:03:01.000383 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Oct 31 14:03:01.732387 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Oct 31 14:03:01.732387 ignition[1068]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Oct 31 14:03:01.740791 ignition[1068]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 31 14:03:01.740791 ignition[1068]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 31 14:03:01.740791 ignition[1068]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Oct 31 14:03:01.740791 ignition[1068]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Oct 31 14:03:01.740791 ignition[1068]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 31 14:03:01.740791 ignition[1068]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 31 14:03:01.740791 ignition[1068]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Oct 31 14:03:01.740791 ignition[1068]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Oct 31 14:03:01.776682 ignition[1068]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Oct 31 14:03:01.784469 ignition[1068]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Oct 31 14:03:01.787587 ignition[1068]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Oct 31 14:03:01.787587 ignition[1068]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Oct 31 14:03:01.792648 ignition[1068]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Oct 31 14:03:01.792648 ignition[1068]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 31 14:03:01.792648 ignition[1068]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 31 14:03:01.792648 ignition[1068]: INFO : files: files passed
Oct 31 14:03:01.792648 ignition[1068]: INFO : Ignition finished successfully
Oct 31 14:03:01.800876 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 31 14:03:01.805704 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 31 14:03:01.807561 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 31 14:03:01.830988 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 31 14:03:01.831167 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 31 14:03:01.839054 initrd-setup-root-after-ignition[1099]: grep: /sysroot/oem/oem-release: No such file or directory
Oct 31 14:03:01.844418 initrd-setup-root-after-ignition[1101]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 31 14:03:01.844418 initrd-setup-root-after-ignition[1101]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 31 14:03:01.849429 initrd-setup-root-after-ignition[1105]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 31 14:03:01.853420 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 31 14:03:01.855615 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 31 14:03:01.860989 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 31 14:03:01.949875 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 31 14:03:01.950070 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 31 14:03:01.951447 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 31 14:03:01.955902 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 31 14:03:01.959819 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 31 14:03:01.963989 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 31 14:03:02.004953 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 31 14:03:02.007427 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 31 14:03:02.035656 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Oct 31 14:03:02.035920 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 31 14:03:02.036893 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 31 14:03:02.042726 systemd[1]: Stopped target timers.target - Timer Units.
Oct 31 14:03:02.046744 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 31 14:03:02.046903 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 31 14:03:02.053019 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 31 14:03:02.053941 systemd[1]: Stopped target basic.target - Basic System.
Oct 31 14:03:02.058798 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 31 14:03:02.061628 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 31 14:03:02.065393 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 31 14:03:02.066362 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Oct 31 14:03:02.075606 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 31 14:03:02.076352 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 31 14:03:02.079431 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 31 14:03:02.083483 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 31 14:03:02.086273 systemd[1]: Stopped target swap.target - Swaps.
Oct 31 14:03:02.089276 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 31 14:03:02.089411 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 31 14:03:02.094258 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 31 14:03:02.095400 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 31 14:03:02.099589 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 31 14:03:02.099745 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 31 14:03:02.103496 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 31 14:03:02.103651 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 31 14:03:02.109709 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 31 14:03:02.109835 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 31 14:03:02.110631 systemd[1]: Stopped target paths.target - Path Units.
Oct 31 14:03:02.115351 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 31 14:03:02.120961 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 31 14:03:02.124098 systemd[1]: Stopped target slices.target - Slice Units.
Oct 31 14:03:02.125311 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 31 14:03:02.131902 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 31 14:03:02.132057 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 31 14:03:02.133009 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 31 14:03:02.133134 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 31 14:03:02.136937 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 31 14:03:02.137108 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 31 14:03:02.139803 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 31 14:03:02.139988 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 31 14:03:02.146096 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 31 14:03:02.147794 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 31 14:03:02.151263 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 31 14:03:02.151508 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 31 14:03:02.161828 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 31 14:03:02.163566 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 31 14:03:02.167307 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 31 14:03:02.168928 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 31 14:03:02.179090 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 31 14:03:02.179234 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 31 14:03:02.208659 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 31 14:03:02.216112 ignition[1125]: INFO : Ignition 2.22.0
Oct 31 14:03:02.216112 ignition[1125]: INFO : Stage: umount
Oct 31 14:03:02.219179 ignition[1125]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 31 14:03:02.219179 ignition[1125]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 31 14:03:02.219179 ignition[1125]: INFO : umount: umount passed
Oct 31 14:03:02.219179 ignition[1125]: INFO : Ignition finished successfully
Oct 31 14:03:02.224478 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 31 14:03:02.224626 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 31 14:03:02.228456 systemd[1]: Stopped target network.target - Network.
Oct 31 14:03:02.229457 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 31 14:03:02.229527 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 31 14:03:02.230588 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 31 14:03:02.230646 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 31 14:03:02.231395 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 31 14:03:02.231450 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 31 14:03:02.238866 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 31 14:03:02.238926 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 31 14:03:02.239803 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 31 14:03:02.240391 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 31 14:03:02.262437 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 31 14:03:02.262627 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 31 14:03:02.268942 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 31 14:03:02.269099 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 31 14:03:02.276688 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Oct 31 14:03:02.277501 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 31 14:03:02.277591 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 31 14:03:02.279630 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 31 14:03:02.280206 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 31 14:03:02.280302 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 31 14:03:02.280916 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 31 14:03:02.281008 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 31 14:03:02.281486 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 31 14:03:02.281571 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 31 14:03:02.282427 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 31 14:03:02.284259 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 31 14:03:02.293227 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 31 14:03:02.294398 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 31 14:03:02.294518 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 31 14:03:02.307173 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 31 14:03:02.307398 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 31 14:03:02.309626 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 31 14:03:02.309686 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 31 14:03:02.312762 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 31 14:03:02.312805 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 31 14:03:02.316353 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 31 14:03:02.316429 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 31 14:03:02.321959 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 31 14:03:02.322039 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 31 14:03:02.326135 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 31 14:03:02.326216 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 31 14:03:02.332377 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 31 14:03:02.336031 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Oct 31 14:03:02.336106 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Oct 31 14:03:02.337216 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 31 14:03:02.337287 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 31 14:03:02.344250 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Oct 31 14:03:02.344324 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 31 14:03:02.345381 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 31 14:03:02.345442 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 31 14:03:02.349686 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 31 14:03:02.349764 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 31 14:03:02.357724 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 31 14:03:02.357945 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 31 14:03:02.404097 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 31 14:03:02.404341 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 31 14:03:02.407795 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 31 14:03:02.409837 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 31 14:03:02.440260 systemd[1]: Switching root.
Oct 31 14:03:02.483931 systemd-journald[314]: Journal stopped
Oct 31 14:03:03.860966 systemd-journald[314]: Received SIGTERM from PID 1 (systemd).
Oct 31 14:03:03.861198 kernel: SELinux: policy capability network_peer_controls=1
Oct 31 14:03:03.861225 kernel: SELinux: policy capability open_perms=1
Oct 31 14:03:03.861268 kernel: SELinux: policy capability extended_socket_class=1
Oct 31 14:03:03.861323 kernel: SELinux: policy capability always_check_network=0
Oct 31 14:03:03.861341 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 31 14:03:03.861356 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 31 14:03:03.861371 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 31 14:03:03.861404 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 31 14:03:03.861426 kernel: SELinux: policy capability userspace_initial_context=0
Oct 31 14:03:03.861452 kernel: audit: type=1403 audit(1761919382.930:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 31 14:03:03.861476 systemd[1]: Successfully loaded SELinux policy in 74.433ms.
Oct 31 14:03:03.861511 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.081ms.
Oct 31 14:03:03.861531 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 31 14:03:03.861554 systemd[1]: Detected virtualization kvm.
Oct 31 14:03:03.861584 systemd[1]: Detected architecture x86-64.
Oct 31 14:03:03.861609 systemd[1]: Detected first boot.
Oct 31 14:03:03.861631 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Oct 31 14:03:03.861651 zram_generator::config[1171]: No configuration found.
Oct 31 14:03:03.861681 kernel: Guest personality initialized and is inactive
Oct 31 14:03:03.861702 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Oct 31 14:03:03.861728 kernel: Initialized host personality
Oct 31 14:03:03.861772 kernel: NET: Registered PF_VSOCK protocol family
Oct 31 14:03:03.861792 systemd[1]: Populated /etc with preset unit settings.
Oct 31 14:03:03.861809 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 31 14:03:03.861826 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 31 14:03:03.861842 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 31 14:03:03.861880 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 31 14:03:03.861909 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 31 14:03:03.861958 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 31 14:03:03.861977 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 31 14:03:03.861993 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 31 14:03:03.862010 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 31 14:03:03.862039 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 31 14:03:03.862062 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 31 14:03:03.862090 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 31 14:03:03.862119 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 31 14:03:03.862137 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 31 14:03:03.862153 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 31 14:03:03.862169 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 31 14:03:03.862186 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 31 14:03:03.862208 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 31 14:03:03.862251 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 31 14:03:03.862285 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 31 14:03:03.862304 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 31 14:03:03.862319 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 31 14:03:03.862336 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 31 14:03:03.862369 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 31 14:03:03.862401 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 31 14:03:03.862449 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 31 14:03:03.862468 systemd[1]: Reached target slices.target - Slice Units.
Oct 31 14:03:03.862484 systemd[1]: Reached target swap.target - Swaps.
Oct 31 14:03:03.862513 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 31 14:03:03.862538 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 31 14:03:03.862574 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Oct 31 14:03:03.862592 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 31 14:03:03.862632 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 31 14:03:03.862660 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 31 14:03:03.862678 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 31 14:03:03.862694 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 31 14:03:03.862712 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 31 14:03:03.862728 systemd[1]: Mounting media.mount - External Media Directory...
Oct 31 14:03:03.862753 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 14:03:03.862778 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 31 14:03:03.862817 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 31 14:03:03.862839 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 31 14:03:03.862894 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 31 14:03:03.862914 systemd[1]: Reached target machines.target - Containers.
Oct 31 14:03:03.862934 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 31 14:03:03.862962 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 31 14:03:03.862996 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 31 14:03:03.863012 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 31 14:03:03.863029 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 31 14:03:03.863045 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 31 14:03:03.863070 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 31 14:03:03.863111 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 31 14:03:03.863137 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 31 14:03:03.863164 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 31 14:03:03.863181 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 31 14:03:03.863197 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 31 14:03:03.863219 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 31 14:03:03.863234 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 31 14:03:03.863251 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 31 14:03:03.863317 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 31 14:03:03.863354 kernel: fuse: init (API version 7.41)
Oct 31 14:03:03.863388 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 31 14:03:03.863444 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 31 14:03:03.863481 kernel: ACPI: bus type drm_connector registered
Oct 31 14:03:03.863523 systemd-journald[1253]: Collecting audit messages is disabled.
Oct 31 14:03:03.863593 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 31 14:03:03.863712 systemd-journald[1253]: Journal started
Oct 31 14:03:03.863741 systemd-journald[1253]: Runtime Journal (/run/log/journal/80e954db8fd04a0f86c0cf4f4873f900) is 6M, max 48.1M, 42M free.
Oct 31 14:03:03.523984 systemd[1]: Queued start job for default target multi-user.target.
Oct 31 14:03:03.545064 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Oct 31 14:03:03.545611 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 31 14:03:03.546025 systemd[1]: systemd-journald.service: Consumed 1.472s CPU time.
Oct 31 14:03:03.868120 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Oct 31 14:03:03.875903 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 31 14:03:03.875954 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 14:03:03.880898 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 31 14:03:03.884945 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 31 14:03:03.887042 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 31 14:03:03.889120 systemd[1]: Mounted media.mount - External Media Directory.
Oct 31 14:03:03.891176 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 31 14:03:03.893317 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 31 14:03:03.896264 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 31 14:03:03.898362 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 31 14:03:03.901074 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 31 14:03:03.903735 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 31 14:03:03.904196 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 31 14:03:03.906563 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 31 14:03:03.907016 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 31 14:03:03.909779 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 31 14:03:03.910204 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 31 14:03:03.912431 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 31 14:03:03.912670 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 31 14:03:03.915143 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 31 14:03:03.915411 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 31 14:03:03.917673 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 31 14:03:03.917908 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 31 14:03:03.920235 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 31 14:03:03.922745 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 31 14:03:03.926059 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 31 14:03:03.928548 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Oct 31 14:03:03.947023 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 31 14:03:03.949327 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Oct 31 14:03:03.952636 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 31 14:03:03.955493 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 31 14:03:03.957382 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 31 14:03:03.957411 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 31 14:03:03.959993 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Oct 31 14:03:03.962175 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 31 14:03:03.968543 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 31 14:03:03.971651 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 31 14:03:03.973708 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 31 14:03:03.975052 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Oct 31 14:03:03.976976 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 31 14:03:03.980068 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 31 14:03:03.985270 systemd-journald[1253]: Time spent on flushing to /var/log/journal/80e954db8fd04a0f86c0cf4f4873f900 is 21.159ms for 1054 entries.
Oct 31 14:03:03.985270 systemd-journald[1253]: System Journal (/var/log/journal/80e954db8fd04a0f86c0cf4f4873f900) is 8M, max 163.5M, 155.5M free.
Oct 31 14:03:04.021116 systemd-journald[1253]: Received client request to flush runtime journal.
Oct 31 14:03:04.021172 kernel: loop1: detected capacity change from 0 to 111544
Oct 31 14:03:03.982942 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Oct 31 14:03:03.985102 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 31 14:03:03.990190 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 31 14:03:03.993969 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Oct 31 14:03:03.996401 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Oct 31 14:03:04.004581 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Oct 31 14:03:04.007446 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Oct 31 14:03:04.013683 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Oct 31 14:03:04.027163 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Oct 31 14:03:04.031838 systemd-tmpfiles[1291]: ACLs are not supported, ignoring.
Oct 31 14:03:04.031877 systemd-tmpfiles[1291]: ACLs are not supported, ignoring.
Oct 31 14:03:04.033260 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 31 14:03:04.037730 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 31 14:03:04.043012 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Oct 31 14:03:04.056070 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Oct 31 14:03:04.061880 kernel: loop2: detected capacity change from 0 to 128912
Oct 31 14:03:04.086884 kernel: loop3: detected capacity change from 0 to 219144
Oct 31 14:03:04.086873 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Oct 31 14:03:04.091612 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 31 14:03:04.094599 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 31 14:03:04.109800 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Oct 31 14:03:04.115925 kernel: loop4: detected capacity change from 0 to 111544
Oct 31 14:03:04.122953 systemd-tmpfiles[1312]: ACLs are not supported, ignoring.
Oct 31 14:03:04.123364 systemd-tmpfiles[1312]: ACLs are not supported, ignoring.
Oct 31 14:03:04.126883 kernel: loop5: detected capacity change from 0 to 128912
Oct 31 14:03:04.129583 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 31 14:03:04.140888 kernel: loop6: detected capacity change from 0 to 219144
Oct 31 14:03:04.149996 (sd-merge)[1315]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'.
Oct 31 14:03:04.154708 (sd-merge)[1315]: Merged extensions into '/usr'.
Oct 31 14:03:04.160489 systemd[1]: Reload requested from client PID 1290 ('systemd-sysext') (unit systemd-sysext.service)...
Oct 31 14:03:04.160520 systemd[1]: Reloading...
Oct 31 14:03:04.248165 zram_generator::config[1348]: No configuration found.
Oct 31 14:03:04.306464 systemd-resolved[1310]: Positive Trust Anchors:
Oct 31 14:03:04.306484 systemd-resolved[1310]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 31 14:03:04.306489 systemd-resolved[1310]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Oct 31 14:03:04.306520 systemd-resolved[1310]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 31 14:03:04.310776 systemd-resolved[1310]: Defaulting to hostname 'linux'.
Oct 31 14:03:04.458362 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Oct 31 14:03:04.458565 systemd[1]: Reloading finished in 297 ms.
Oct 31 14:03:04.497463 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Oct 31 14:03:04.499783 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 31 14:03:04.502813 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Oct 31 14:03:04.507687 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 31 14:03:04.526606 systemd[1]: Starting ensure-sysext.service...
Oct 31 14:03:04.530237 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 31 14:03:04.551621 systemd[1]: Reload requested from client PID 1385 ('systemctl') (unit ensure-sysext.service)...
Oct 31 14:03:04.551640 systemd[1]: Reloading...
Oct 31 14:03:04.599687 systemd-tmpfiles[1386]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Oct 31 14:03:04.599869 systemd-tmpfiles[1386]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Oct 31 14:03:04.600214 systemd-tmpfiles[1386]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Oct 31 14:03:04.600512 systemd-tmpfiles[1386]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Oct 31 14:03:04.601476 systemd-tmpfiles[1386]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Oct 31 14:03:04.601754 systemd-tmpfiles[1386]: ACLs are not supported, ignoring.
Oct 31 14:03:04.601831 systemd-tmpfiles[1386]: ACLs are not supported, ignoring.
Oct 31 14:03:04.608303 systemd-tmpfiles[1386]: Detected autofs mount point /boot during canonicalization of boot.
Oct 31 14:03:04.608312 systemd-tmpfiles[1386]: Skipping /boot
Oct 31 14:03:04.619586 systemd-tmpfiles[1386]: Detected autofs mount point /boot during canonicalization of boot.
Oct 31 14:03:04.619603 systemd-tmpfiles[1386]: Skipping /boot
Oct 31 14:03:04.663953 zram_generator::config[1416]: No configuration found.
Oct 31 14:03:04.849164 systemd[1]: Reloading finished in 297 ms.
Oct 31 14:03:04.876060 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Oct 31 14:03:04.908045 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 31 14:03:04.919113 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Oct 31 14:03:04.921745 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Oct 31 14:03:04.932165 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Oct 31 14:03:04.937070 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Oct 31 14:03:04.941029 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 31 14:03:04.945461 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Oct 31 14:03:04.949561 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 14:03:04.949877 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 31 14:03:04.951562 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 31 14:03:04.959185 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 31 14:03:04.966185 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 31 14:03:04.968287 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 31 14:03:04.968405 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 31 14:03:04.968502 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 14:03:04.969725 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 31 14:03:04.970579 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 31 14:03:04.984097 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 31 14:03:04.984347 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 31 14:03:04.987011 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 31 14:03:04.987256 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 31 14:03:04.994684 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 14:03:04.995123 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 31 14:03:04.997094 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 31 14:03:04.999013 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 31 14:03:04.999309 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 31 14:03:04.999548 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 31 14:03:04.999820 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 14:03:05.001146 systemd-udevd[1460]: Using default interface naming scheme 'v257'.
Oct 31 14:03:05.001613 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Oct 31 14:03:05.008371 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 31 14:03:05.008625 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 31 14:03:05.011386 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Oct 31 14:03:05.019792 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 14:03:05.020986 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 31 14:03:05.022304 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 31 14:03:05.025684 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 31 14:03:05.029766 augenrules[1491]: No rules
Oct 31 14:03:05.029148 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 31 14:03:05.039220 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 31 14:03:05.041191 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 31 14:03:05.041240 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 31 14:03:05.041317 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 31 14:03:05.042357 systemd[1]: Finished ensure-sysext.service.
Oct 31 14:03:05.044381 systemd[1]: audit-rules.service: Deactivated successfully.
Oct 31 14:03:05.044737 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Oct 31 14:03:05.048463 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 31 14:03:05.049188 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 31 14:03:05.051518 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 31 14:03:05.051721 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 31 14:03:05.054030 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 31 14:03:05.057091 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 31 14:03:05.057324 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 31 14:03:05.059603 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 31 14:03:05.060497 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 31 14:03:05.072967 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Oct 31 14:03:05.089353 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 31 14:03:05.091202 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 31 14:03:05.091332 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 31 14:03:05.093097 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Oct 31 14:03:05.095046 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Oct 31 14:03:05.154527 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Oct 31 14:03:05.238316 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 31 14:03:05.244885 kernel: mousedev: PS/2 mouse device common for all mice
Oct 31 14:03:05.245009 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Oct 31 14:03:05.254877 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Oct 31 14:03:05.265888 kernel: ACPI: button: Power Button [PWRF]
Oct 31 14:03:05.291365 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Oct 31 14:03:05.308594 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Oct 31 14:03:05.310816 systemd[1]: Reached target time-set.target - System Time Set.
Oct 31 14:03:05.343656 systemd-networkd[1520]: lo: Link UP
Oct 31 14:03:05.343669 systemd-networkd[1520]: lo: Gained carrier
Oct 31 14:03:05.345973 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 31 14:03:05.347596 systemd-networkd[1520]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 31 14:03:05.347610 systemd-networkd[1520]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 31 14:03:05.347982 systemd[1]: Reached target network.target - Network.
Oct 31 14:03:05.350484 systemd-networkd[1520]: eth0: Link UP
Oct 31 14:03:05.350754 systemd-networkd[1520]: eth0: Gained carrier
Oct 31 14:03:05.350768 systemd-networkd[1520]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 31 14:03:05.352726 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Oct 31 14:03:05.366372 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Oct 31 14:03:05.390392 systemd-networkd[1520]: eth0: DHCPv4 address 10.0.0.39/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 31 14:03:05.391305 systemd-timesyncd[1521]: Network configuration changed, trying to establish connection.
Oct 31 14:03:05.393446 systemd-timesyncd[1521]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Oct 31 14:03:05.395902 systemd-timesyncd[1521]: Initial clock synchronization to Fri 2025-10-31 14:03:05.430322 UTC.
Oct 31 14:03:05.401256 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Oct 31 14:03:05.411270 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Oct 31 14:03:05.411707 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Oct 31 14:03:05.416228 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct 31 14:03:05.418643 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 31 14:03:05.487274 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 31 14:03:05.487624 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 31 14:03:05.502171 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 31 14:03:05.692350 kernel: kvm_amd: TSC scaling supported
Oct 31 14:03:05.692423 kernel: kvm_amd: Nested Virtualization enabled
Oct 31 14:03:05.692438 kernel: kvm_amd: Nested Paging enabled
Oct 31 14:03:05.693350 kernel: kvm_amd: LBR virtualization supported
Oct 31 14:03:05.695293 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Oct 31 14:03:05.695322 kernel: kvm_amd: Virtual GIF supported
Oct 31 14:03:05.731882 kernel: EDAC MC: Ver: 3.0.0
Oct 31 14:03:05.761574 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 31 14:03:05.798635 ldconfig[1457]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Oct 31 14:03:05.947931 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Oct 31 14:03:05.953348 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Oct 31 14:03:05.993904 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Oct 31 14:03:05.996247 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 31 14:03:05.998214 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Oct 31 14:03:06.000351 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Oct 31 14:03:06.002549 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Oct 31 14:03:06.004665 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Oct 31 14:03:06.006746 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Oct 31 14:03:06.008829 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Oct 31 14:03:06.010930 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Oct 31 14:03:06.010969 systemd[1]: Reached target paths.target - Path Units.
Oct 31 14:03:06.012517 systemd[1]: Reached target timers.target - Timer Units.
Oct 31 14:03:06.015251 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Oct 31 14:03:06.018719 systemd[1]: Starting docker.socket - Docker Socket for the API...
Oct 31 14:03:06.022631 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Oct 31 14:03:06.024884 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Oct 31 14:03:06.026967 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Oct 31 14:03:06.031689 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Oct 31 14:03:06.034331 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Oct 31 14:03:06.038242 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Oct 31 14:03:06.041209 systemd[1]: Reached target sockets.target - Socket Units.
Oct 31 14:03:06.042987 systemd[1]: Reached target basic.target - Basic System.
Oct 31 14:03:06.044750 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Oct 31 14:03:06.044786 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Oct 31 14:03:06.045869 systemd[1]: Starting containerd.service - containerd container runtime...
Oct 31 14:03:06.048974 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Oct 31 14:03:06.051740 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Oct 31 14:03:06.066003 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Oct 31 14:03:06.069260 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Oct 31 14:03:06.071012 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Oct 31 14:03:06.072466 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Oct 31 14:03:06.074574 jq[1582]: false
Oct 31 14:03:06.076324 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Oct 31 14:03:06.081914 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Oct 31 14:03:06.085606 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Oct 31 14:03:06.087101 extend-filesystems[1583]: Found /dev/vda6
Oct 31 14:03:06.089922 google_oslogin_nss_cache[1584]: oslogin_cache_refresh[1584]: Refreshing passwd entry cache
Oct 31 14:03:06.089001 oslogin_cache_refresh[1584]: Refreshing passwd entry cache
Oct 31 14:03:06.090315 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Oct 31 14:03:06.095230 extend-filesystems[1583]: Found /dev/vda9
Oct 31 14:03:06.099969 extend-filesystems[1583]: Checking size of /dev/vda9
Oct 31 14:03:06.102791 systemd[1]: Starting systemd-logind.service - User Login Management...
Oct 31 14:03:06.104688 oslogin_cache_refresh[1584]: Failure getting users, quitting
Oct 31 14:03:06.105005 google_oslogin_nss_cache[1584]: oslogin_cache_refresh[1584]: Failure getting users, quitting
Oct 31 14:03:06.105005 google_oslogin_nss_cache[1584]: oslogin_cache_refresh[1584]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Oct 31 14:03:06.105005 google_oslogin_nss_cache[1584]: oslogin_cache_refresh[1584]: Refreshing group entry cache
Oct 31 14:03:06.104712 oslogin_cache_refresh[1584]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Oct 31 14:03:06.104779 oslogin_cache_refresh[1584]: Refreshing group entry cache
Oct 31 14:03:06.105292 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Oct 31 14:03:06.106071 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Oct 31 14:03:06.107023 systemd[1]: Starting update-engine.service - Update Engine...
Oct 31 14:03:06.112752 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Oct 31 14:03:06.117055 google_oslogin_nss_cache[1584]: oslogin_cache_refresh[1584]: Failure getting groups, quitting
Oct 31 14:03:06.117055 google_oslogin_nss_cache[1584]: oslogin_cache_refresh[1584]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Oct 31 14:03:06.117139 extend-filesystems[1583]: Resized partition /dev/vda9
Oct 31 14:03:06.114872 oslogin_cache_refresh[1584]: Failure getting groups, quitting
Oct 31 14:03:06.118904 extend-filesystems[1608]: resize2fs 1.47.3 (8-Jul-2025)
Oct 31 14:03:06.114888 oslogin_cache_refresh[1584]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Oct 31 14:03:06.118818 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Oct 31 14:03:06.124874 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks
Oct 31 14:03:06.120207 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Oct 31 14:03:06.123503 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Oct 31 14:03:06.123919 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Oct 31 14:03:06.124163 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Oct 31 14:03:06.127500 systemd[1]: motdgen.service: Deactivated successfully.
Oct 31 14:03:06.127750 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Oct 31 14:03:06.134363 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Oct 31 14:03:06.134631 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Oct 31 14:03:06.221932 jq[1607]: true
Oct 31 14:03:06.227455 update_engine[1602]: I20251031 14:03:06.227018 1602 main.cc:92] Flatcar Update Engine starting
Oct 31 14:03:06.251878 tar[1611]: linux-amd64/LICENSE
Oct 31 14:03:06.251878 tar[1611]: linux-amd64/helm
Oct 31 14:03:06.261738 jq[1623]: true
Oct 31 14:03:06.278870 kernel: EXT4-fs (vda9): resized filesystem to 1784827
Oct 31 14:03:06.281535 dbus-daemon[1580]: [system] SELinux support is enabled
Oct 31 14:03:06.282001 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Oct 31 14:03:06.311128 update_engine[1602]: I20251031 14:03:06.288519 1602 update_check_scheduler.cc:74] Next update check in 11m27s
Oct 31 14:03:06.288523 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Oct 31 14:03:06.288549 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Oct 31 14:03:06.290972 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Oct 31 14:03:06.290988 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Oct 31 14:03:06.293670 systemd[1]: Started update-engine.service - Update Engine.
Oct 31 14:03:06.298645 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Oct 31 14:03:06.311945 systemd-logind[1600]: Watching system buttons on /dev/input/event2 (Power Button)
Oct 31 14:03:06.312230 systemd-logind[1600]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Oct 31 14:03:06.313600 extend-filesystems[1608]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Oct 31 14:03:06.313600 extend-filesystems[1608]: old_desc_blocks = 1, new_desc_blocks = 1
Oct 31 14:03:06.313600 extend-filesystems[1608]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long.
Oct 31 14:03:06.353539 extend-filesystems[1583]: Resized filesystem in /dev/vda9
Oct 31 14:03:06.313736 systemd-logind[1600]: New seat seat0.
Oct 31 14:03:06.355248 sshd_keygen[1610]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Oct 31 14:03:06.313908 systemd[1]: extend-filesystems.service: Deactivated successfully.
Oct 31 14:03:06.319318 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Oct 31 14:03:06.345144 systemd[1]: Started systemd-logind.service - User Login Management.
Oct 31 14:03:06.368601 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Oct 31 14:03:06.377158 systemd[1]: Starting issuegen.service - Generate /run/issue...
Oct 31 14:03:06.380652 bash[1650]: Updated "/home/core/.ssh/authorized_keys"
Oct 31 14:03:06.379320 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Oct 31 14:03:06.383769 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Oct 31 14:03:06.402594 systemd[1]: issuegen.service: Deactivated successfully.
Oct 31 14:03:06.402902 systemd[1]: Finished issuegen.service - Generate /run/issue.
Oct 31 14:03:06.405302 locksmithd[1645]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Oct 31 14:03:06.409387 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Oct 31 14:03:06.434860 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Oct 31 14:03:06.441507 systemd[1]: Started getty@tty1.service - Getty on tty1.
Oct 31 14:03:06.445637 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Oct 31 14:03:06.447680 systemd[1]: Reached target getty.target - Login Prompts.
Oct 31 14:03:06.509094 systemd-networkd[1520]: eth0: Gained IPv6LL
Oct 31 14:03:06.513922 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Oct 31 14:03:06.517605 systemd[1]: Reached target network-online.target - Network is Online.
Oct 31 14:03:06.523064 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Oct 31 14:03:06.531137 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Oct 31 14:03:06.541160 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Oct 31 14:03:06.597115 systemd[1]: coreos-metadata.service: Deactivated successfully.
Oct 31 14:03:06.597609 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Oct 31 14:03:06.601819 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Oct 31 14:03:06.607249 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Oct 31 14:03:06.672058 containerd[1613]: time="2025-10-31T14:03:06Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Oct 31 14:03:06.679363 containerd[1613]: time="2025-10-31T14:03:06.678075257Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Oct 31 14:03:06.703126 containerd[1613]: time="2025-10-31T14:03:06.702974717Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="121.007µs" Oct 31 14:03:06.703571 containerd[1613]: time="2025-10-31T14:03:06.703414459Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Oct 31 14:03:06.703717 containerd[1613]: time="2025-10-31T14:03:06.703692490Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Oct 31 14:03:06.704195 containerd[1613]: time="2025-10-31T14:03:06.704168049Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Oct 31 14:03:06.704307 containerd[1613]: time="2025-10-31T14:03:06.704288013Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Oct 31 14:03:06.704415 containerd[1613]: time="2025-10-31T14:03:06.704390894Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 31 14:03:06.704712 containerd[1613]: time="2025-10-31T14:03:06.704664823Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 31 14:03:06.704797 containerd[1613]: time="2025-10-31T14:03:06.704779329Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 31 
14:03:06.705786 containerd[1613]: time="2025-10-31T14:03:06.705738012Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 31 14:03:06.705945 containerd[1613]: time="2025-10-31T14:03:06.705917254Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 31 14:03:06.706057 containerd[1613]: time="2025-10-31T14:03:06.706022563Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 31 14:03:06.706149 containerd[1613]: time="2025-10-31T14:03:06.706118885Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Oct 31 14:03:06.706654 containerd[1613]: time="2025-10-31T14:03:06.706607744Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Oct 31 14:03:06.707318 containerd[1613]: time="2025-10-31T14:03:06.707277562Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 31 14:03:06.707457 containerd[1613]: time="2025-10-31T14:03:06.707434607Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 31 14:03:06.707558 containerd[1613]: time="2025-10-31T14:03:06.707524209Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Oct 31 14:03:06.707936 containerd[1613]: time="2025-10-31T14:03:06.707751797Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Oct 31 14:03:06.710315 
containerd[1613]: time="2025-10-31T14:03:06.710230440Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Oct 31 14:03:06.710445 containerd[1613]: time="2025-10-31T14:03:06.710348267Z" level=info msg="metadata content store policy set" policy=shared Oct 31 14:03:06.720393 containerd[1613]: time="2025-10-31T14:03:06.720309359Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Oct 31 14:03:06.720558 containerd[1613]: time="2025-10-31T14:03:06.720455271Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Oct 31 14:03:06.720558 containerd[1613]: time="2025-10-31T14:03:06.720498301Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Oct 31 14:03:06.720558 containerd[1613]: time="2025-10-31T14:03:06.720530720Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Oct 31 14:03:06.720558 containerd[1613]: time="2025-10-31T14:03:06.720547772Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Oct 31 14:03:06.720831 containerd[1613]: time="2025-10-31T14:03:06.720579016Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Oct 31 14:03:06.720831 containerd[1613]: time="2025-10-31T14:03:06.720602958Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Oct 31 14:03:06.720831 containerd[1613]: time="2025-10-31T14:03:06.720631775Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Oct 31 14:03:06.720831 containerd[1613]: time="2025-10-31T14:03:06.720713363Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Oct 31 14:03:06.720831 containerd[1613]: 
time="2025-10-31T14:03:06.720747686Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Oct 31 14:03:06.720831 containerd[1613]: time="2025-10-31T14:03:06.720773424Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Oct 31 14:03:06.720831 containerd[1613]: time="2025-10-31T14:03:06.720808822Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Oct 31 14:03:06.721190 containerd[1613]: time="2025-10-31T14:03:06.721139612Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Oct 31 14:03:06.721262 containerd[1613]: time="2025-10-31T14:03:06.721219886Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Oct 31 14:03:06.721262 containerd[1613]: time="2025-10-31T14:03:06.721239415Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Oct 31 14:03:06.721366 containerd[1613]: time="2025-10-31T14:03:06.721262956Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Oct 31 14:03:06.721366 containerd[1613]: time="2025-10-31T14:03:06.721299035Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Oct 31 14:03:06.721435 containerd[1613]: time="2025-10-31T14:03:06.721369127Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Oct 31 14:03:06.721646 containerd[1613]: time="2025-10-31T14:03:06.721411877Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Oct 31 14:03:06.721694 containerd[1613]: time="2025-10-31T14:03:06.721643006Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Oct 31 14:03:06.721694 containerd[1613]: time="2025-10-31T14:03:06.721659808Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Oct 31 14:03:06.721694 containerd[1613]: time="2025-10-31T14:03:06.721672435Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 31 14:03:06.721789 containerd[1613]: time="2025-10-31T14:03:06.721703750Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 31 14:03:06.721963 containerd[1613]: time="2025-10-31T14:03:06.721907457Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 31 14:03:06.721963 containerd[1613]: time="2025-10-31T14:03:06.721949655Z" level=info msg="Start snapshots syncer" Oct 31 14:03:06.722275 containerd[1613]: time="2025-10-31T14:03:06.722232260Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 31 14:03:06.723005 containerd[1613]: time="2025-10-31T14:03:06.722936221Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 31 14:03:06.723318 containerd[1613]: time="2025-10-31T14:03:06.723081089Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 31 14:03:06.723513 containerd[1613]: time="2025-10-31T14:03:06.723381991Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 31 14:03:06.723951 containerd[1613]: time="2025-10-31T14:03:06.723910089Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 31 14:03:06.724027 containerd[1613]: time="2025-10-31T14:03:06.723959619Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 31 14:03:06.724027 containerd[1613]: time="2025-10-31T14:03:06.723987563Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 31 14:03:06.724027 containerd[1613]: time="2025-10-31T14:03:06.724018758Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 31 14:03:06.724167 containerd[1613]: time="2025-10-31T14:03:06.724052952Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 31 14:03:06.724167 containerd[1613]: time="2025-10-31T14:03:06.724081919Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 31 14:03:06.724167 containerd[1613]: time="2025-10-31T14:03:06.724104839Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 31 14:03:06.724282 containerd[1613]: time="2025-10-31T14:03:06.724205403Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 31 14:03:06.724282 containerd[1613]: time="2025-10-31T14:03:06.724255414Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 31 14:03:06.724282 containerd[1613]: time="2025-10-31T14:03:06.724274241Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 31 14:03:06.724427 containerd[1613]: time="2025-10-31T14:03:06.724329429Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 31 14:03:06.724427 containerd[1613]: time="2025-10-31T14:03:06.724368155Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 31 14:03:06.724427 containerd[1613]: time="2025-10-31T14:03:06.724392198Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 31 14:03:06.724529 containerd[1613]: time="2025-10-31T14:03:06.724422490Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 31 14:03:06.724529 containerd[1613]: time="2025-10-31T14:03:06.724453374Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 31 14:03:06.724529 containerd[1613]: time="2025-10-31T14:03:06.724481308Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 31 14:03:06.724529 containerd[1613]: time="2025-10-31T14:03:06.724496595Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 31 14:03:06.724637 containerd[1613]: time="2025-10-31T14:03:06.724533958Z" level=info msg="runtime interface created" Oct 31 14:03:06.724637 containerd[1613]: time="2025-10-31T14:03:06.724549435Z" level=info msg="created NRI interface" Oct 31 14:03:06.724637 containerd[1613]: time="2025-10-31T14:03:06.724571441Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 31 14:03:06.724637 containerd[1613]: time="2025-10-31T14:03:06.724603267Z" level=info msg="Connect containerd service" Oct 31 14:03:06.724637 containerd[1613]: time="2025-10-31T14:03:06.724633629Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 31 14:03:06.726281 
containerd[1613]: time="2025-10-31T14:03:06.726253392Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 31 14:03:07.005909 containerd[1613]: time="2025-10-31T14:03:07.005649476Z" level=info msg="Start subscribing containerd event" Oct 31 14:03:07.006756 containerd[1613]: time="2025-10-31T14:03:07.006435978Z" level=info msg="Start recovering state" Oct 31 14:03:07.010607 containerd[1613]: time="2025-10-31T14:03:07.010587957Z" level=info msg="Start event monitor" Oct 31 14:03:07.010690 containerd[1613]: time="2025-10-31T14:03:07.010678297Z" level=info msg="Start cni network conf syncer for default" Oct 31 14:03:07.010746 containerd[1613]: time="2025-10-31T14:03:07.010735096Z" level=info msg="Start streaming server" Oct 31 14:03:07.010819 containerd[1613]: time="2025-10-31T14:03:07.010808155Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 31 14:03:07.010891 containerd[1613]: time="2025-10-31T14:03:07.010879318Z" level=info msg="runtime interface starting up..." Oct 31 14:03:07.010938 containerd[1613]: time="2025-10-31T14:03:07.010928566Z" level=info msg="starting plugins..." Oct 31 14:03:07.011005 containerd[1613]: time="2025-10-31T14:03:07.010986448Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 31 14:03:07.011910 containerd[1613]: time="2025-10-31T14:03:07.011876030Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 31 14:03:07.012273 containerd[1613]: time="2025-10-31T14:03:07.012255044Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 31 14:03:07.015626 systemd[1]: Started containerd.service - containerd container runtime. 
Oct 31 14:03:07.020198 containerd[1613]: time="2025-10-31T14:03:07.018658225Z" level=info msg="containerd successfully booted in 0.347465s" Oct 31 14:03:07.067242 tar[1611]: linux-amd64/README.md Oct 31 14:03:07.096707 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 31 14:03:07.987483 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 14:03:07.990089 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 31 14:03:07.992204 systemd[1]: Startup finished in 3.334s (kernel) + 8.068s (initrd) + 5.133s (userspace) = 16.536s. Oct 31 14:03:08.009201 (kubelet)[1724]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 31 14:03:08.456954 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 31 14:03:08.458572 systemd[1]: Started sshd@0-10.0.0.39:22-10.0.0.1:41724.service - OpenSSH per-connection server daemon (10.0.0.1:41724). Oct 31 14:03:08.578730 sshd[1736]: Accepted publickey for core from 10.0.0.1 port 41724 ssh2: RSA SHA256:vB+C50gRw4XKWL1h6W1ZxwGTSFbLNajJZ+GX1JbmfgQ Oct 31 14:03:08.581145 sshd-session[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 14:03:08.589387 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 31 14:03:08.591102 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 31 14:03:08.601427 systemd-logind[1600]: New session 1 of user core. 
Oct 31 14:03:08.735762 kubelet[1724]: E1031 14:03:08.735605 1724 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 31 14:03:08.738138 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 31 14:03:08.741633 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 31 14:03:08.741878 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 31 14:03:08.742088 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 31 14:03:08.742420 systemd[1]: kubelet.service: Consumed 1.976s CPU time, 256.9M memory peak. Oct 31 14:03:08.764245 (systemd)[1741]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 31 14:03:08.766788 systemd-logind[1600]: New session c1 of user core. Oct 31 14:03:08.932319 systemd[1741]: Queued start job for default target default.target. Oct 31 14:03:08.951166 systemd[1741]: Created slice app.slice - User Application Slice. Oct 31 14:03:08.951197 systemd[1741]: Reached target paths.target - Paths. Oct 31 14:03:08.951286 systemd[1741]: Reached target timers.target - Timers. Oct 31 14:03:08.953250 systemd[1741]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 31 14:03:08.969730 systemd[1741]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 31 14:03:08.969882 systemd[1741]: Reached target sockets.target - Sockets. Oct 31 14:03:08.969921 systemd[1741]: Reached target basic.target - Basic System. Oct 31 14:03:08.969963 systemd[1741]: Reached target default.target - Main User Target. Oct 31 14:03:08.969996 systemd[1741]: Startup finished in 195ms. Oct 31 14:03:08.970560 systemd[1]: Started user@500.service - User Manager for UID 500. 
Oct 31 14:03:08.972243 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 31 14:03:08.985062 systemd[1]: Started sshd@1-10.0.0.39:22-10.0.0.1:41730.service - OpenSSH per-connection server daemon (10.0.0.1:41730). Oct 31 14:03:09.048114 sshd[1753]: Accepted publickey for core from 10.0.0.1 port 41730 ssh2: RSA SHA256:vB+C50gRw4XKWL1h6W1ZxwGTSFbLNajJZ+GX1JbmfgQ Oct 31 14:03:09.049695 sshd-session[1753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 14:03:09.054747 systemd-logind[1600]: New session 2 of user core. Oct 31 14:03:09.064030 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 31 14:03:09.078125 sshd[1756]: Connection closed by 10.0.0.1 port 41730 Oct 31 14:03:09.078470 sshd-session[1753]: pam_unix(sshd:session): session closed for user core Oct 31 14:03:09.097671 systemd[1]: sshd@1-10.0.0.39:22-10.0.0.1:41730.service: Deactivated successfully. Oct 31 14:03:09.099688 systemd[1]: session-2.scope: Deactivated successfully. Oct 31 14:03:09.100496 systemd-logind[1600]: Session 2 logged out. Waiting for processes to exit. Oct 31 14:03:09.103106 systemd[1]: Started sshd@2-10.0.0.39:22-10.0.0.1:41746.service - OpenSSH per-connection server daemon (10.0.0.1:41746). Oct 31 14:03:09.103838 systemd-logind[1600]: Removed session 2. Oct 31 14:03:09.177417 sshd[1762]: Accepted publickey for core from 10.0.0.1 port 41746 ssh2: RSA SHA256:vB+C50gRw4XKWL1h6W1ZxwGTSFbLNajJZ+GX1JbmfgQ Oct 31 14:03:09.178918 sshd-session[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 14:03:09.184208 systemd-logind[1600]: New session 3 of user core. Oct 31 14:03:09.204038 systemd[1]: Started session-3.scope - Session 3 of User core. 
Oct 31 14:03:09.215168 sshd[1765]: Connection closed by 10.0.0.1 port 41746 Oct 31 14:03:09.215503 sshd-session[1762]: pam_unix(sshd:session): session closed for user core Oct 31 14:03:09.226923 systemd[1]: sshd@2-10.0.0.39:22-10.0.0.1:41746.service: Deactivated successfully. Oct 31 14:03:09.229366 systemd[1]: session-3.scope: Deactivated successfully. Oct 31 14:03:09.230222 systemd-logind[1600]: Session 3 logged out. Waiting for processes to exit. Oct 31 14:03:09.233919 systemd[1]: Started sshd@3-10.0.0.39:22-10.0.0.1:41750.service - OpenSSH per-connection server daemon (10.0.0.1:41750). Oct 31 14:03:09.234510 systemd-logind[1600]: Removed session 3. Oct 31 14:03:09.289578 sshd[1771]: Accepted publickey for core from 10.0.0.1 port 41750 ssh2: RSA SHA256:vB+C50gRw4XKWL1h6W1ZxwGTSFbLNajJZ+GX1JbmfgQ Oct 31 14:03:09.290256 sshd-session[1771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 14:03:09.295224 systemd-logind[1600]: New session 4 of user core. Oct 31 14:03:09.305100 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 31 14:03:09.322343 sshd[1776]: Connection closed by 10.0.0.1 port 41750 Oct 31 14:03:09.322729 sshd-session[1771]: pam_unix(sshd:session): session closed for user core Oct 31 14:03:09.332579 systemd[1]: sshd@3-10.0.0.39:22-10.0.0.1:41750.service: Deactivated successfully. Oct 31 14:03:09.334898 systemd[1]: session-4.scope: Deactivated successfully. Oct 31 14:03:09.335821 systemd-logind[1600]: Session 4 logged out. Waiting for processes to exit. Oct 31 14:03:09.339104 systemd[1]: Started sshd@4-10.0.0.39:22-10.0.0.1:41756.service - OpenSSH per-connection server daemon (10.0.0.1:41756). Oct 31 14:03:09.339650 systemd-logind[1600]: Removed session 4. 
Oct 31 14:03:09.409301 sshd[1782]: Accepted publickey for core from 10.0.0.1 port 41756 ssh2: RSA SHA256:vB+C50gRw4XKWL1h6W1ZxwGTSFbLNajJZ+GX1JbmfgQ Oct 31 14:03:09.410817 sshd-session[1782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 14:03:09.415418 systemd-logind[1600]: New session 5 of user core. Oct 31 14:03:09.428981 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 31 14:03:09.452105 sudo[1786]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 31 14:03:09.452433 sudo[1786]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 31 14:03:09.597552 sudo[1786]: pam_unix(sudo:session): session closed for user root Oct 31 14:03:09.599582 sshd[1785]: Connection closed by 10.0.0.1 port 41756 Oct 31 14:03:09.600029 sshd-session[1782]: pam_unix(sshd:session): session closed for user core Oct 31 14:03:09.617634 systemd[1]: sshd@4-10.0.0.39:22-10.0.0.1:41756.service: Deactivated successfully. Oct 31 14:03:09.620078 systemd[1]: session-5.scope: Deactivated successfully. Oct 31 14:03:09.620992 systemd-logind[1600]: Session 5 logged out. Waiting for processes to exit. Oct 31 14:03:09.624929 systemd[1]: Started sshd@5-10.0.0.39:22-10.0.0.1:41760.service - OpenSSH per-connection server daemon (10.0.0.1:41760). Oct 31 14:03:09.625683 systemd-logind[1600]: Removed session 5. Oct 31 14:03:09.687588 sshd[1792]: Accepted publickey for core from 10.0.0.1 port 41760 ssh2: RSA SHA256:vB+C50gRw4XKWL1h6W1ZxwGTSFbLNajJZ+GX1JbmfgQ Oct 31 14:03:09.689205 sshd-session[1792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 14:03:09.693934 systemd-logind[1600]: New session 6 of user core. Oct 31 14:03:09.713009 systemd[1]: Started session-6.scope - Session 6 of User core. 
Oct 31 14:03:09.729134 sudo[1797]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 31 14:03:09.729461 sudo[1797]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 31 14:03:09.737625 sudo[1797]: pam_unix(sudo:session): session closed for user root Oct 31 14:03:09.745832 sudo[1796]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 31 14:03:09.746251 sudo[1796]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 31 14:03:09.758694 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 31 14:03:09.818084 augenrules[1819]: No rules Oct 31 14:03:09.819895 systemd[1]: audit-rules.service: Deactivated successfully. Oct 31 14:03:09.820202 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 31 14:03:09.821603 sudo[1796]: pam_unix(sudo:session): session closed for user root Oct 31 14:03:09.823538 sshd[1795]: Connection closed by 10.0.0.1 port 41760 Oct 31 14:03:09.823836 sshd-session[1792]: pam_unix(sshd:session): session closed for user core Oct 31 14:03:09.832583 systemd[1]: sshd@5-10.0.0.39:22-10.0.0.1:41760.service: Deactivated successfully. Oct 31 14:03:09.834526 systemd[1]: session-6.scope: Deactivated successfully. Oct 31 14:03:09.835322 systemd-logind[1600]: Session 6 logged out. Waiting for processes to exit. Oct 31 14:03:09.838150 systemd[1]: Started sshd@6-10.0.0.39:22-10.0.0.1:41774.service - OpenSSH per-connection server daemon (10.0.0.1:41774). Oct 31 14:03:09.839096 systemd-logind[1600]: Removed session 6. Oct 31 14:03:09.887704 sshd[1828]: Accepted publickey for core from 10.0.0.1 port 41774 ssh2: RSA SHA256:vB+C50gRw4XKWL1h6W1ZxwGTSFbLNajJZ+GX1JbmfgQ Oct 31 14:03:09.889037 sshd-session[1828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 14:03:09.894462 systemd-logind[1600]: New session 7 of user core. 
Oct 31 14:03:09.908035 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 31 14:03:09.923692 sudo[1832]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 31 14:03:09.924116 sudo[1832]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 31 14:03:10.871463 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 31 14:03:10.905624 (dockerd)[1853]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 31 14:03:11.481267 dockerd[1853]: time="2025-10-31T14:03:11.481191490Z" level=info msg="Starting up" Oct 31 14:03:11.482089 dockerd[1853]: time="2025-10-31T14:03:11.482061878Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 31 14:03:11.499789 dockerd[1853]: time="2025-10-31T14:03:11.499731320Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 31 14:03:12.002264 dockerd[1853]: time="2025-10-31T14:03:12.002194012Z" level=info msg="Loading containers: start." Oct 31 14:03:12.013882 kernel: Initializing XFRM netlink socket Oct 31 14:03:12.354070 systemd-networkd[1520]: docker0: Link UP Oct 31 14:03:12.361089 dockerd[1853]: time="2025-10-31T14:03:12.360966825Z" level=info msg="Loading containers: done." Oct 31 14:03:12.386455 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1527604902-merged.mount: Deactivated successfully. 
Oct 31 14:03:12.389345 dockerd[1853]: time="2025-10-31T14:03:12.389268526Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 31 14:03:12.389459 dockerd[1853]: time="2025-10-31T14:03:12.389410207Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Oct 31 14:03:12.389585 dockerd[1853]: time="2025-10-31T14:03:12.389544546Z" level=info msg="Initializing buildkit" Oct 31 14:03:12.428814 dockerd[1853]: time="2025-10-31T14:03:12.428751332Z" level=info msg="Completed buildkit initialization" Oct 31 14:03:12.440225 dockerd[1853]: time="2025-10-31T14:03:12.440149690Z" level=info msg="Daemon has completed initialization" Oct 31 14:03:12.440428 dockerd[1853]: time="2025-10-31T14:03:12.440281814Z" level=info msg="API listen on /run/docker.sock" Oct 31 14:03:12.440591 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 31 14:03:13.382179 containerd[1613]: time="2025-10-31T14:03:13.382109899Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Oct 31 14:03:14.070706 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3135890829.mount: Deactivated successfully. 
Oct 31 14:03:15.511403 containerd[1613]: time="2025-10-31T14:03:15.511323656Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:03:15.512033 containerd[1613]: time="2025-10-31T14:03:15.511990063Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=27065392" Oct 31 14:03:15.513268 containerd[1613]: time="2025-10-31T14:03:15.513236831Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:03:15.516123 containerd[1613]: time="2025-10-31T14:03:15.516085840Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:03:15.517630 containerd[1613]: time="2025-10-31T14:03:15.517574250Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 2.13540713s" Oct 31 14:03:15.517698 containerd[1613]: time="2025-10-31T14:03:15.517639308Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\"" Oct 31 14:03:15.518411 containerd[1613]: time="2025-10-31T14:03:15.518371084Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Oct 31 14:03:17.028808 containerd[1613]: time="2025-10-31T14:03:17.028737240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:03:17.030037 containerd[1613]: time="2025-10-31T14:03:17.029956377Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=21159757" Oct 31 14:03:17.031456 containerd[1613]: time="2025-10-31T14:03:17.031411759Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:03:17.034458 containerd[1613]: time="2025-10-31T14:03:17.034422752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:03:17.035726 containerd[1613]: time="2025-10-31T14:03:17.035651224Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 1.517244623s" Oct 31 14:03:17.035726 containerd[1613]: time="2025-10-31T14:03:17.035714895Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\"" Oct 31 14:03:17.036384 containerd[1613]: time="2025-10-31T14:03:17.036331382Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Oct 31 14:03:17.938495 containerd[1613]: time="2025-10-31T14:03:17.938427887Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:03:17.939174 containerd[1613]: time="2025-10-31T14:03:17.939140413Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=15725093" Oct 31 14:03:17.940420 containerd[1613]: time="2025-10-31T14:03:17.940348409Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:03:17.943544 containerd[1613]: time="2025-10-31T14:03:17.943510268Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:03:17.944945 containerd[1613]: time="2025-10-31T14:03:17.944885775Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 908.514366ms" Oct 31 14:03:17.945007 containerd[1613]: time="2025-10-31T14:03:17.944947962Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\"" Oct 31 14:03:17.945507 containerd[1613]: time="2025-10-31T14:03:17.945463137Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Oct 31 14:03:18.956724 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 31 14:03:18.959366 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 14:03:19.445241 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2096375796.mount: Deactivated successfully. Oct 31 14:03:19.496597 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 31 14:03:19.516183 (kubelet)[2151]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 31 14:03:19.751996 kubelet[2151]: E1031 14:03:19.751801 2151 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 31 14:03:19.759240 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 31 14:03:19.759444 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 31 14:03:19.760194 systemd[1]: kubelet.service: Consumed 350ms CPU time, 108.8M memory peak. Oct 31 14:03:20.191701 containerd[1613]: time="2025-10-31T14:03:20.191529610Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:03:20.192512 containerd[1613]: time="2025-10-31T14:03:20.192473187Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=25964699" Oct 31 14:03:20.193641 containerd[1613]: time="2025-10-31T14:03:20.193598047Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:03:20.195598 containerd[1613]: time="2025-10-31T14:03:20.195557269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:03:20.196123 containerd[1613]: time="2025-10-31T14:03:20.196065537Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id 
\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 2.250573242s" Oct 31 14:03:20.196166 containerd[1613]: time="2025-10-31T14:03:20.196125233Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\"" Oct 31 14:03:20.196686 containerd[1613]: time="2025-10-31T14:03:20.196660583Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Oct 31 14:03:20.804585 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1383550780.mount: Deactivated successfully. Oct 31 14:03:22.185381 containerd[1613]: time="2025-10-31T14:03:22.185305799Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:03:22.186000 containerd[1613]: time="2025-10-31T14:03:22.185954876Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Oct 31 14:03:22.187264 containerd[1613]: time="2025-10-31T14:03:22.187212266Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:03:22.189703 containerd[1613]: time="2025-10-31T14:03:22.189662670Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:03:22.191099 containerd[1613]: time="2025-10-31T14:03:22.191052089Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", 
repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.994361066s" Oct 31 14:03:22.191137 containerd[1613]: time="2025-10-31T14:03:22.191100884Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Oct 31 14:03:22.191706 containerd[1613]: time="2025-10-31T14:03:22.191682418Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Oct 31 14:03:22.624058 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1448179453.mount: Deactivated successfully. Oct 31 14:03:22.631068 containerd[1613]: time="2025-10-31T14:03:22.631029571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:03:22.631917 containerd[1613]: time="2025-10-31T14:03:22.631871775Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Oct 31 14:03:22.633235 containerd[1613]: time="2025-10-31T14:03:22.633201521Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:03:22.635496 containerd[1613]: time="2025-10-31T14:03:22.635447630Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:03:22.636247 containerd[1613]: time="2025-10-31T14:03:22.636188763Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest 
\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 444.478474ms" Oct 31 14:03:22.636291 containerd[1613]: time="2025-10-31T14:03:22.636241499Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Oct 31 14:03:22.636732 containerd[1613]: time="2025-10-31T14:03:22.636706814Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Oct 31 14:03:26.216088 containerd[1613]: time="2025-10-31T14:03:26.215989325Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:03:26.217242 containerd[1613]: time="2025-10-31T14:03:26.217196357Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=73514593" Oct 31 14:03:26.219492 containerd[1613]: time="2025-10-31T14:03:26.219437799Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:03:26.223037 containerd[1613]: time="2025-10-31T14:03:26.223002219Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:03:26.224394 containerd[1613]: time="2025-10-31T14:03:26.224337368Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 3.587602924s" Oct 31 14:03:26.224394 containerd[1613]: time="2025-10-31T14:03:26.224373238Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns 
image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Oct 31 14:03:29.120452 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 14:03:29.120627 systemd[1]: kubelet.service: Consumed 350ms CPU time, 108.8M memory peak. Oct 31 14:03:29.122844 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 14:03:29.170290 systemd[1]: Reload requested from client PID 2292 ('systemctl') (unit session-7.scope)... Oct 31 14:03:29.170307 systemd[1]: Reloading... Oct 31 14:03:29.273900 zram_generator::config[2339]: No configuration found. Oct 31 14:03:29.565145 systemd[1]: Reloading finished in 394 ms. Oct 31 14:03:29.631591 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 31 14:03:29.631694 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 31 14:03:29.632024 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 14:03:29.632072 systemd[1]: kubelet.service: Consumed 160ms CPU time, 98.1M memory peak. Oct 31 14:03:29.633655 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 14:03:29.820219 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 14:03:29.844307 (kubelet)[2384]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 31 14:03:29.903547 kubelet[2384]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 31 14:03:29.903547 kubelet[2384]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 31 14:03:29.903996 kubelet[2384]: I1031 14:03:29.903612 2384 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 31 14:03:30.459337 kubelet[2384]: I1031 14:03:30.459273 2384 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 31 14:03:30.459337 kubelet[2384]: I1031 14:03:30.459316 2384 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 31 14:03:30.461098 kubelet[2384]: I1031 14:03:30.461068 2384 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 31 14:03:30.461098 kubelet[2384]: I1031 14:03:30.461089 2384 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 31 14:03:30.461362 kubelet[2384]: I1031 14:03:30.461337 2384 server.go:956] "Client rotation is on, will bootstrap in background" Oct 31 14:03:31.000698 kubelet[2384]: E1031 14:03:31.000641 2384 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.39:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 31 14:03:31.001226 kubelet[2384]: I1031 14:03:31.000790 2384 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 31 14:03:31.007452 kubelet[2384]: I1031 14:03:31.007409 2384 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 31 14:03:31.013876 kubelet[2384]: I1031 14:03:31.013705 2384 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Oct 31 14:03:31.014616 kubelet[2384]: I1031 14:03:31.014555 2384 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 31 14:03:31.097491 kubelet[2384]: I1031 14:03:31.014593 2384 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 31 14:03:31.097491 kubelet[2384]: I1031 14:03:31.097481 2384 topology_manager.go:138] "Creating topology manager with none policy" Oct 31 14:03:31.097491 
kubelet[2384]: I1031 14:03:31.097507 2384 container_manager_linux.go:306] "Creating device plugin manager" Oct 31 14:03:31.097831 kubelet[2384]: I1031 14:03:31.097736 2384 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Oct 31 14:03:31.101803 kubelet[2384]: I1031 14:03:31.101770 2384 state_mem.go:36] "Initialized new in-memory state store" Oct 31 14:03:31.102603 kubelet[2384]: I1031 14:03:31.102559 2384 kubelet.go:475] "Attempting to sync node with API server" Oct 31 14:03:31.102603 kubelet[2384]: I1031 14:03:31.102602 2384 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 31 14:03:31.102670 kubelet[2384]: I1031 14:03:31.102645 2384 kubelet.go:387] "Adding apiserver pod source" Oct 31 14:03:31.102708 kubelet[2384]: I1031 14:03:31.102684 2384 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 31 14:03:31.107551 kubelet[2384]: E1031 14:03:31.106619 2384 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 31 14:03:31.107551 kubelet[2384]: I1031 14:03:31.106648 2384 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 31 14:03:31.107551 kubelet[2384]: E1031 14:03:31.106762 2384 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 31 14:03:31.107551 kubelet[2384]: I1031 14:03:31.107274 2384 kubelet.go:940] "Not starting ClusterTrustBundle informer because 
we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 31 14:03:31.107551 kubelet[2384]: I1031 14:03:31.107305 2384 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Oct 31 14:03:31.107551 kubelet[2384]: W1031 14:03:31.107389 2384 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 31 14:03:31.111883 kubelet[2384]: I1031 14:03:31.111805 2384 server.go:1262] "Started kubelet" Oct 31 14:03:31.112030 kubelet[2384]: I1031 14:03:31.112005 2384 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 31 14:03:31.112782 kubelet[2384]: I1031 14:03:31.112758 2384 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 31 14:03:31.113298 kubelet[2384]: I1031 14:03:31.113278 2384 server.go:310] "Adding debug handlers to kubelet server" Oct 31 14:03:31.113541 kubelet[2384]: I1031 14:03:31.113517 2384 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 31 14:03:31.114922 kubelet[2384]: I1031 14:03:31.114863 2384 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 31 14:03:31.114987 kubelet[2384]: I1031 14:03:31.114939 2384 server_v1.go:49] "podresources" method="list" useActivePods=true Oct 31 14:03:31.115298 kubelet[2384]: I1031 14:03:31.115271 2384 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 31 14:03:31.115563 kubelet[2384]: I1031 14:03:31.115538 2384 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 31 14:03:31.115791 kubelet[2384]: E1031 14:03:31.115757 2384 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 31 14:03:31.116134 
kubelet[2384]: E1031 14:03:31.116101 2384 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.39:6443: connect: connection refused" interval="200ms" Oct 31 14:03:31.116297 kubelet[2384]: I1031 14:03:31.116166 2384 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 31 14:03:31.116297 kubelet[2384]: I1031 14:03:31.116250 2384 reconciler.go:29] "Reconciler: start to sync state" Oct 31 14:03:31.116636 kubelet[2384]: E1031 14:03:31.116613 2384 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 31 14:03:31.117380 kubelet[2384]: E1031 14:03:31.115333 2384 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.39:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.39:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1873985e8e2e84dc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-31 14:03:31.11175702 +0000 UTC m=+1.251186949,LastTimestamp:2025-10-31 14:03:31.11175702 +0000 UTC m=+1.251186949,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 31 14:03:31.117651 kubelet[2384]: I1031 14:03:31.117616 2384 factory.go:223] Registration of the systemd container factory successfully Oct 31 14:03:31.118013 kubelet[2384]: 
I1031 14:03:31.117738 2384 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 31 14:03:31.119169 kubelet[2384]: I1031 14:03:31.119139 2384 factory.go:223] Registration of the containerd container factory successfully Oct 31 14:03:31.119331 kubelet[2384]: E1031 14:03:31.119309 2384 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 31 14:03:31.136242 kubelet[2384]: I1031 14:03:31.136208 2384 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 31 14:03:31.136242 kubelet[2384]: I1031 14:03:31.136225 2384 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 31 14:03:31.136242 kubelet[2384]: I1031 14:03:31.136248 2384 state_mem.go:36] "Initialized new in-memory state store" Oct 31 14:03:31.139955 kubelet[2384]: I1031 14:03:31.139510 2384 policy_none.go:49] "None policy: Start" Oct 31 14:03:31.139955 kubelet[2384]: I1031 14:03:31.139555 2384 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 31 14:03:31.139955 kubelet[2384]: I1031 14:03:31.139581 2384 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Oct 31 14:03:31.141076 kubelet[2384]: I1031 14:03:31.141029 2384 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Oct 31 14:03:31.142889 kubelet[2384]: I1031 14:03:31.142713 2384 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Oct 31 14:03:31.142889 kubelet[2384]: I1031 14:03:31.142759 2384 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 31 14:03:31.142889 kubelet[2384]: I1031 14:03:31.142772 2384 policy_none.go:47] "Start" Oct 31 14:03:31.142889 kubelet[2384]: I1031 14:03:31.142798 2384 kubelet.go:2427] "Starting kubelet main sync loop" Oct 31 14:03:31.143184 kubelet[2384]: E1031 14:03:31.142841 2384 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 31 14:03:31.144242 kubelet[2384]: E1031 14:03:31.144204 2384 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 31 14:03:31.148256 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 31 14:03:31.166050 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 31 14:03:31.169911 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Oct 31 14:03:31.183654 kubelet[2384]: E1031 14:03:31.183603 2384 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 31 14:03:31.183961 kubelet[2384]: I1031 14:03:31.183936 2384 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 31 14:03:31.184042 kubelet[2384]: I1031 14:03:31.183955 2384 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 31 14:03:31.184270 kubelet[2384]: I1031 14:03:31.184241 2384 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 31 14:03:31.185171 kubelet[2384]: E1031 14:03:31.185150 2384 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 31 14:03:31.185237 kubelet[2384]: E1031 14:03:31.185198 2384 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 31 14:03:31.256227 systemd[1]: Created slice kubepods-burstable-pod239924950a1a68124e5c2e21e36567a0.slice - libcontainer container kubepods-burstable-pod239924950a1a68124e5c2e21e36567a0.slice. Oct 31 14:03:31.268759 kubelet[2384]: E1031 14:03:31.268704 2384 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 14:03:31.272335 systemd[1]: Created slice kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice - libcontainer container kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice. 
Oct 31 14:03:31.283186 kubelet[2384]: E1031 14:03:31.283133 2384 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 14:03:31.286092 systemd[1]: Created slice kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice - libcontainer container kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice. Oct 31 14:03:31.286437 kubelet[2384]: I1031 14:03:31.286404 2384 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 14:03:31.286827 kubelet[2384]: E1031 14:03:31.286782 2384 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.39:6443/api/v1/nodes\": dial tcp 10.0.0.39:6443: connect: connection refused" node="localhost" Oct 31 14:03:31.288179 kubelet[2384]: E1031 14:03:31.288162 2384 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 14:03:31.316913 kubelet[2384]: E1031 14:03:31.316803 2384 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.39:6443: connect: connection refused" interval="400ms" Oct 31 14:03:31.318076 kubelet[2384]: I1031 14:03:31.318032 2384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/239924950a1a68124e5c2e21e36567a0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"239924950a1a68124e5c2e21e36567a0\") " pod="kube-system/kube-apiserver-localhost" Oct 31 14:03:31.318173 kubelet[2384]: I1031 14:03:31.318080 2384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod 
\"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 14:03:31.318173 kubelet[2384]: I1031 14:03:31.318096 2384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 14:03:31.318173 kubelet[2384]: I1031 14:03:31.318113 2384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 14:03:31.318173 kubelet[2384]: I1031 14:03:31.318148 2384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/239924950a1a68124e5c2e21e36567a0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"239924950a1a68124e5c2e21e36567a0\") " pod="kube-system/kube-apiserver-localhost" Oct 31 14:03:31.318313 kubelet[2384]: I1031 14:03:31.318224 2384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/239924950a1a68124e5c2e21e36567a0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"239924950a1a68124e5c2e21e36567a0\") " pod="kube-system/kube-apiserver-localhost" Oct 31 14:03:31.318435 kubelet[2384]: I1031 14:03:31.318307 2384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 14:03:31.318435 kubelet[2384]: I1031 14:03:31.318341 2384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 31 14:03:31.318435 kubelet[2384]: I1031 14:03:31.318386 2384 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Oct 31 14:03:31.488926 kubelet[2384]: I1031 14:03:31.488876 2384 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 14:03:31.489329 kubelet[2384]: E1031 14:03:31.489285 2384 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.39:6443/api/v1/nodes\": dial tcp 10.0.0.39:6443: connect: connection refused" node="localhost" Oct 31 14:03:31.572925 kubelet[2384]: E1031 14:03:31.572726 2384 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 14:03:31.574069 containerd[1613]: time="2025-10-31T14:03:31.574009150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:239924950a1a68124e5c2e21e36567a0,Namespace:kube-system,Attempt:0,}" Oct 31 14:03:31.587246 kubelet[2384]: E1031 14:03:31.587184 2384 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 14:03:31.587797 containerd[1613]: time="2025-10-31T14:03:31.587727334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,}" Oct 31 14:03:31.592089 kubelet[2384]: E1031 14:03:31.592043 2384 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 14:03:31.592455 containerd[1613]: time="2025-10-31T14:03:31.592406748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,}" Oct 31 14:03:31.717487 kubelet[2384]: E1031 14:03:31.717426 2384 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.39:6443: connect: connection refused" interval="800ms" Oct 31 14:03:31.891872 kubelet[2384]: I1031 14:03:31.891712 2384 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 14:03:31.892228 kubelet[2384]: E1031 14:03:31.892162 2384 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.39:6443/api/v1/nodes\": dial tcp 10.0.0.39:6443: connect: connection refused" node="localhost" Oct 31 14:03:31.925600 kubelet[2384]: E1031 14:03:31.925497 2384 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 31 14:03:32.182901 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3815450522.mount: Deactivated successfully. Oct 31 14:03:32.189599 containerd[1613]: time="2025-10-31T14:03:32.189538625Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 14:03:32.190545 containerd[1613]: time="2025-10-31T14:03:32.190480738Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Oct 31 14:03:32.192385 containerd[1613]: time="2025-10-31T14:03:32.192352532Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 14:03:32.196450 containerd[1613]: time="2025-10-31T14:03:32.196394413Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 14:03:32.197534 containerd[1613]: time="2025-10-31T14:03:32.197493820Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 14:03:32.198569 containerd[1613]: time="2025-10-31T14:03:32.198499082Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Oct 31 14:03:32.199526 containerd[1613]: time="2025-10-31T14:03:32.199472419Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 31 14:03:32.200348 containerd[1613]: time="2025-10-31T14:03:32.200300941Z" level=info 
msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 622.789699ms" Oct 31 14:03:32.200675 containerd[1613]: time="2025-10-31T14:03:32.200635617Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Oct 31 14:03:32.204526 containerd[1613]: time="2025-10-31T14:03:32.204495946Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 608.358268ms" Oct 31 14:03:32.205228 kubelet[2384]: E1031 14:03:32.205168 2384 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 31 14:03:32.207303 containerd[1613]: time="2025-10-31T14:03:32.207256627Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 615.482875ms" Oct 31 14:03:32.240642 containerd[1613]: time="2025-10-31T14:03:32.240556766Z" level=info msg="connecting to shim 826dea25074d77dfbdb3a58a23e207c1c7ea230471e4a0f5361bb5dd51febbed" address="unix:///run/containerd/s/f53c0452ad1750e45767d6b8574fcbde7570c5de14a72b14291257736c14b10e" 
namespace=k8s.io protocol=ttrpc version=3 Oct 31 14:03:32.294344 kubelet[2384]: E1031 14:03:32.294295 2384 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 31 14:03:32.299018 systemd[1]: Started cri-containerd-826dea25074d77dfbdb3a58a23e207c1c7ea230471e4a0f5361bb5dd51febbed.scope - libcontainer container 826dea25074d77dfbdb3a58a23e207c1c7ea230471e4a0f5361bb5dd51febbed. Oct 31 14:03:32.495164 containerd[1613]: time="2025-10-31T14:03:32.495014211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:239924950a1a68124e5c2e21e36567a0,Namespace:kube-system,Attempt:0,} returns sandbox id \"826dea25074d77dfbdb3a58a23e207c1c7ea230471e4a0f5361bb5dd51febbed\"" Oct 31 14:03:32.497871 kubelet[2384]: E1031 14:03:32.497772 2384 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 14:03:32.508634 containerd[1613]: time="2025-10-31T14:03:32.508570176Z" level=info msg="connecting to shim 2ae6f5c8c35e76f23f2c53fe1d5a2b9d2c97c451cc73f9e3a75e59fc4e287b0a" address="unix:///run/containerd/s/883c640f0a07d72635d6b5294e02fd1847125f0cb80395a65ed027da233b2ced" namespace=k8s.io protocol=ttrpc version=3 Oct 31 14:03:32.518550 kubelet[2384]: E1031 14:03:32.518508 2384 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.39:6443: connect: connection refused" interval="1.6s" Oct 31 14:03:32.544981 systemd[1]: Started cri-containerd-2ae6f5c8c35e76f23f2c53fe1d5a2b9d2c97c451cc73f9e3a75e59fc4e287b0a.scope - 
libcontainer container 2ae6f5c8c35e76f23f2c53fe1d5a2b9d2c97c451cc73f9e3a75e59fc4e287b0a. Oct 31 14:03:32.565414 kubelet[2384]: E1031 14:03:32.565374 2384 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 31 14:03:32.604827 containerd[1613]: time="2025-10-31T14:03:32.604775348Z" level=info msg="CreateContainer within sandbox \"826dea25074d77dfbdb3a58a23e207c1c7ea230471e4a0f5361bb5dd51febbed\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 31 14:03:32.607405 containerd[1613]: time="2025-10-31T14:03:32.607373402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ae6f5c8c35e76f23f2c53fe1d5a2b9d2c97c451cc73f9e3a75e59fc4e287b0a\"" Oct 31 14:03:32.608121 kubelet[2384]: E1031 14:03:32.608093 2384 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 14:03:32.615045 containerd[1613]: time="2025-10-31T14:03:32.614995634Z" level=info msg="connecting to shim 0d891e80d6a301f4c6a0fcb360ae497f976e8d23de34d8d165888865827a08f2" address="unix:///run/containerd/s/6cc525c1220721c45f0f26048c85f10d3ab39c63ff237529a7eddb5d5be9344b" namespace=k8s.io protocol=ttrpc version=3 Oct 31 14:03:32.615183 containerd[1613]: time="2025-10-31T14:03:32.615140959Z" level=info msg="CreateContainer within sandbox \"2ae6f5c8c35e76f23f2c53fe1d5a2b9d2c97c451cc73f9e3a75e59fc4e287b0a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 31 14:03:32.623359 containerd[1613]: time="2025-10-31T14:03:32.623303939Z" level=info msg="Container 
9d917519b85f54ad4e881a19bbc64cdb6b3be0e42d234f7cf039c3d2c7f9b3e4: CDI devices from CRI Config.CDIDevices: []" Oct 31 14:03:32.633077 containerd[1613]: time="2025-10-31T14:03:32.633024485Z" level=info msg="Container 453dcf26eb752443dbd60830fc14631d89803d752902916ba7ed126ccbe0e2d1: CDI devices from CRI Config.CDIDevices: []" Oct 31 14:03:32.639582 containerd[1613]: time="2025-10-31T14:03:32.639529759Z" level=info msg="CreateContainer within sandbox \"826dea25074d77dfbdb3a58a23e207c1c7ea230471e4a0f5361bb5dd51febbed\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9d917519b85f54ad4e881a19bbc64cdb6b3be0e42d234f7cf039c3d2c7f9b3e4\"" Oct 31 14:03:32.641891 containerd[1613]: time="2025-10-31T14:03:32.640297195Z" level=info msg="StartContainer for \"9d917519b85f54ad4e881a19bbc64cdb6b3be0e42d234f7cf039c3d2c7f9b3e4\"" Oct 31 14:03:32.641891 containerd[1613]: time="2025-10-31T14:03:32.641615142Z" level=info msg="connecting to shim 9d917519b85f54ad4e881a19bbc64cdb6b3be0e42d234f7cf039c3d2c7f9b3e4" address="unix:///run/containerd/s/f53c0452ad1750e45767d6b8574fcbde7570c5de14a72b14291257736c14b10e" protocol=ttrpc version=3 Oct 31 14:03:32.642067 systemd[1]: Started cri-containerd-0d891e80d6a301f4c6a0fcb360ae497f976e8d23de34d8d165888865827a08f2.scope - libcontainer container 0d891e80d6a301f4c6a0fcb360ae497f976e8d23de34d8d165888865827a08f2. 
Oct 31 14:03:32.646575 containerd[1613]: time="2025-10-31T14:03:32.646519335Z" level=info msg="CreateContainer within sandbox \"2ae6f5c8c35e76f23f2c53fe1d5a2b9d2c97c451cc73f9e3a75e59fc4e287b0a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"453dcf26eb752443dbd60830fc14631d89803d752902916ba7ed126ccbe0e2d1\"" Oct 31 14:03:32.647815 containerd[1613]: time="2025-10-31T14:03:32.647753623Z" level=info msg="StartContainer for \"453dcf26eb752443dbd60830fc14631d89803d752902916ba7ed126ccbe0e2d1\"" Oct 31 14:03:32.657923 containerd[1613]: time="2025-10-31T14:03:32.649267455Z" level=info msg="connecting to shim 453dcf26eb752443dbd60830fc14631d89803d752902916ba7ed126ccbe0e2d1" address="unix:///run/containerd/s/883c640f0a07d72635d6b5294e02fd1847125f0cb80395a65ed027da233b2ced" protocol=ttrpc version=3 Oct 31 14:03:32.694208 systemd[1]: Started cri-containerd-453dcf26eb752443dbd60830fc14631d89803d752902916ba7ed126ccbe0e2d1.scope - libcontainer container 453dcf26eb752443dbd60830fc14631d89803d752902916ba7ed126ccbe0e2d1. Oct 31 14:03:32.695878 kubelet[2384]: I1031 14:03:32.694945 2384 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 14:03:32.695878 kubelet[2384]: E1031 14:03:32.695472 2384 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.39:6443/api/v1/nodes\": dial tcp 10.0.0.39:6443: connect: connection refused" node="localhost" Oct 31 14:03:32.705041 systemd[1]: Started cri-containerd-9d917519b85f54ad4e881a19bbc64cdb6b3be0e42d234f7cf039c3d2c7f9b3e4.scope - libcontainer container 9d917519b85f54ad4e881a19bbc64cdb6b3be0e42d234f7cf039c3d2c7f9b3e4. 
Oct 31 14:03:32.738964 containerd[1613]: time="2025-10-31T14:03:32.738905126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d891e80d6a301f4c6a0fcb360ae497f976e8d23de34d8d165888865827a08f2\"" Oct 31 14:03:32.742185 kubelet[2384]: E1031 14:03:32.742136 2384 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 14:03:32.749734 containerd[1613]: time="2025-10-31T14:03:32.748428293Z" level=info msg="CreateContainer within sandbox \"0d891e80d6a301f4c6a0fcb360ae497f976e8d23de34d8d165888865827a08f2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 31 14:03:32.760876 containerd[1613]: time="2025-10-31T14:03:32.760805061Z" level=info msg="Container f15f44588f11279e90e9572256bf863295526fa95bb3cc2fce83523185b6dde9: CDI devices from CRI Config.CDIDevices: []" Oct 31 14:03:32.776088 containerd[1613]: time="2025-10-31T14:03:32.776014322Z" level=info msg="StartContainer for \"453dcf26eb752443dbd60830fc14631d89803d752902916ba7ed126ccbe0e2d1\" returns successfully" Oct 31 14:03:32.781712 containerd[1613]: time="2025-10-31T14:03:32.781663768Z" level=info msg="CreateContainer within sandbox \"0d891e80d6a301f4c6a0fcb360ae497f976e8d23de34d8d165888865827a08f2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f15f44588f11279e90e9572256bf863295526fa95bb3cc2fce83523185b6dde9\"" Oct 31 14:03:32.783221 containerd[1613]: time="2025-10-31T14:03:32.783196777Z" level=info msg="StartContainer for \"f15f44588f11279e90e9572256bf863295526fa95bb3cc2fce83523185b6dde9\"" Oct 31 14:03:32.784478 containerd[1613]: time="2025-10-31T14:03:32.784448055Z" level=info msg="connecting to shim f15f44588f11279e90e9572256bf863295526fa95bb3cc2fce83523185b6dde9" 
address="unix:///run/containerd/s/6cc525c1220721c45f0f26048c85f10d3ab39c63ff237529a7eddb5d5be9344b" protocol=ttrpc version=3 Oct 31 14:03:32.794479 containerd[1613]: time="2025-10-31T14:03:32.794412391Z" level=info msg="StartContainer for \"9d917519b85f54ad4e881a19bbc64cdb6b3be0e42d234f7cf039c3d2c7f9b3e4\" returns successfully" Oct 31 14:03:32.814658 systemd[1]: Started cri-containerd-f15f44588f11279e90e9572256bf863295526fa95bb3cc2fce83523185b6dde9.scope - libcontainer container f15f44588f11279e90e9572256bf863295526fa95bb3cc2fce83523185b6dde9. Oct 31 14:03:32.906804 containerd[1613]: time="2025-10-31T14:03:32.906763333Z" level=info msg="StartContainer for \"f15f44588f11279e90e9572256bf863295526fa95bb3cc2fce83523185b6dde9\" returns successfully" Oct 31 14:03:33.159421 kubelet[2384]: E1031 14:03:33.159276 2384 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 14:03:33.159689 kubelet[2384]: E1031 14:03:33.159456 2384 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 14:03:33.160265 kubelet[2384]: E1031 14:03:33.160240 2384 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 14:03:33.160384 kubelet[2384]: E1031 14:03:33.160360 2384 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 14:03:33.163413 kubelet[2384]: E1031 14:03:33.163384 2384 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 14:03:33.163544 kubelet[2384]: E1031 14:03:33.163522 2384 dns.go:154] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 14:03:34.167171 kubelet[2384]: E1031 14:03:34.167125 2384 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 14:03:34.167761 kubelet[2384]: E1031 14:03:34.167271 2384 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 14:03:34.167895 kubelet[2384]: E1031 14:03:34.167833 2384 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 31 14:03:34.167980 kubelet[2384]: E1031 14:03:34.167961 2384 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 14:03:34.297384 kubelet[2384]: I1031 14:03:34.297346 2384 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 31 14:03:34.424030 kubelet[2384]: E1031 14:03:34.423374 2384 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 31 14:03:34.541030 kubelet[2384]: I1031 14:03:34.540978 2384 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 31 14:03:34.616658 kubelet[2384]: I1031 14:03:34.616595 2384 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 31 14:03:34.623520 kubelet[2384]: E1031 14:03:34.623454 2384 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 31 14:03:34.623520 kubelet[2384]: 
I1031 14:03:34.623498 2384 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 31 14:03:34.626324 kubelet[2384]: E1031 14:03:34.626274 2384 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 31 14:03:34.626324 kubelet[2384]: I1031 14:03:34.626324 2384 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 31 14:03:34.628809 kubelet[2384]: E1031 14:03:34.628735 2384 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 31 14:03:35.106223 kubelet[2384]: I1031 14:03:35.106138 2384 apiserver.go:52] "Watching apiserver" Oct 31 14:03:35.116866 kubelet[2384]: I1031 14:03:35.116800 2384 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 31 14:03:35.165992 kubelet[2384]: I1031 14:03:35.165965 2384 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 31 14:03:35.168235 kubelet[2384]: E1031 14:03:35.168166 2384 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 31 14:03:35.168729 kubelet[2384]: E1031 14:03:35.168339 2384 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 14:03:35.395390 kubelet[2384]: I1031 14:03:35.395178 2384 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 31 14:03:35.403594 kubelet[2384]: E1031 14:03:35.403560 2384 
dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 14:03:36.167229 kubelet[2384]: E1031 14:03:36.167183 2384 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 14:03:36.269607 systemd[1]: Reload requested from client PID 2674 ('systemctl') (unit session-7.scope)... Oct 31 14:03:36.269623 systemd[1]: Reloading... Oct 31 14:03:36.353726 zram_generator::config[2715]: No configuration found. Oct 31 14:03:36.581533 systemd[1]: Reloading finished in 311 ms. Oct 31 14:03:36.608673 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 14:03:36.628136 systemd[1]: kubelet.service: Deactivated successfully. Oct 31 14:03:36.628492 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 14:03:36.628552 systemd[1]: kubelet.service: Consumed 1.287s CPU time, 126.5M memory peak. Oct 31 14:03:36.631065 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 31 14:03:36.839126 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 31 14:03:36.843640 (kubelet)[2763]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 31 14:03:36.891127 kubelet[2763]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 31 14:03:36.891127 kubelet[2763]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 31 14:03:36.891553 kubelet[2763]: I1031 14:03:36.891229 2763 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 31 14:03:36.897968 kubelet[2763]: I1031 14:03:36.897906 2763 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 31 14:03:36.897968 kubelet[2763]: I1031 14:03:36.897931 2763 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 31 14:03:36.897968 kubelet[2763]: I1031 14:03:36.897958 2763 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 31 14:03:36.897968 kubelet[2763]: I1031 14:03:36.897970 2763 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 31 14:03:36.898260 kubelet[2763]: I1031 14:03:36.898135 2763 server.go:956] "Client rotation is on, will bootstrap in background" Oct 31 14:03:36.899201 kubelet[2763]: I1031 14:03:36.899169 2763 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Oct 31 14:03:36.902033 kubelet[2763]: I1031 14:03:36.901996 2763 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 31 14:03:36.905090 kubelet[2763]: I1031 14:03:36.905070 2763 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 31 14:03:36.912773 kubelet[2763]: I1031 14:03:36.911136 2763 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Oct 31 14:03:36.912773 kubelet[2763]: I1031 14:03:36.911369 2763 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 31 14:03:36.912773 kubelet[2763]: I1031 14:03:36.911400 2763 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 31 14:03:36.912773 kubelet[2763]: I1031 14:03:36.911541 2763 topology_manager.go:138] "Creating topology manager with none policy" Oct 31 14:03:36.913159 
kubelet[2763]: I1031 14:03:36.911550 2763 container_manager_linux.go:306] "Creating device plugin manager"
Oct 31 14:03:36.913159 kubelet[2763]: I1031 14:03:36.911574 2763 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Oct 31 14:03:36.913159 kubelet[2763]: I1031 14:03:36.912346 2763 state_mem.go:36] "Initialized new in-memory state store"
Oct 31 14:03:36.913159 kubelet[2763]: I1031 14:03:36.912535 2763 kubelet.go:475] "Attempting to sync node with API server"
Oct 31 14:03:36.913159 kubelet[2763]: I1031 14:03:36.912548 2763 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 31 14:03:36.913159 kubelet[2763]: I1031 14:03:36.912568 2763 kubelet.go:387] "Adding apiserver pod source"
Oct 31 14:03:36.913159 kubelet[2763]: I1031 14:03:36.912589 2763 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 31 14:03:36.917875 kubelet[2763]: I1031 14:03:36.917432 2763 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Oct 31 14:03:36.918537 kubelet[2763]: I1031 14:03:36.918514 2763 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Oct 31 14:03:36.918537 kubelet[2763]: I1031 14:03:36.918545 2763 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Oct 31 14:03:36.921346 kubelet[2763]: I1031 14:03:36.921317 2763 server.go:1262] "Started kubelet"
Oct 31 14:03:36.921425 kubelet[2763]: I1031 14:03:36.921375 2763 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Oct 31 14:03:36.922184 kubelet[2763]: I1031 14:03:36.922163 2763 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Oct 31 14:03:36.922565 kubelet[2763]: I1031 14:03:36.922522 2763 server.go:310] "Adding debug handlers to kubelet server"
Oct 31 14:03:36.923149 kubelet[2763]: I1031 14:03:36.923098 2763 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Oct 31 14:03:36.924134 kubelet[2763]: I1031 14:03:36.924092 2763 volume_manager.go:313] "Starting Kubelet Volume Manager"
Oct 31 14:03:36.924364 kubelet[2763]: I1031 14:03:36.924198 2763 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Oct 31 14:03:36.924364 kubelet[2763]: I1031 14:03:36.924331 2763 reconciler.go:29] "Reconciler: start to sync state"
Oct 31 14:03:36.926887 kubelet[2763]: I1031 14:03:36.925245 2763 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Oct 31 14:03:36.927112 kubelet[2763]: I1031 14:03:36.927092 2763 factory.go:223] Registration of the containerd container factory successfully
Oct 31 14:03:36.927112 kubelet[2763]: I1031 14:03:36.927110 2763 factory.go:223] Registration of the systemd container factory successfully
Oct 31 14:03:36.929748 kubelet[2763]: I1031 14:03:36.922514 2763 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Oct 31 14:03:36.929918 kubelet[2763]: I1031 14:03:36.929775 2763 server_v1.go:49] "podresources" method="list" useActivePods=true
Oct 31 14:03:36.929989 kubelet[2763]: I1031 14:03:36.929971 2763 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Oct 31 14:03:36.936183 kubelet[2763]: I1031 14:03:36.936133 2763 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Oct 31 14:03:36.944802 kubelet[2763]: I1031 14:03:36.944755 2763 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Oct 31 14:03:36.944802 kubelet[2763]: I1031 14:03:36.944783 2763 status_manager.go:244] "Starting to sync pod status with apiserver"
Oct 31 14:03:36.944802 kubelet[2763]: I1031 14:03:36.944812 2763 kubelet.go:2427] "Starting kubelet main sync loop"
Oct 31 14:03:36.945017 kubelet[2763]: E1031 14:03:36.944892 2763 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Oct 31 14:03:36.971289 kubelet[2763]: I1031 14:03:36.971254 2763 cpu_manager.go:221] "Starting CPU manager" policy="none"
Oct 31 14:03:36.971289 kubelet[2763]: I1031 14:03:36.971274 2763 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Oct 31 14:03:36.971289 kubelet[2763]: I1031 14:03:36.971298 2763 state_mem.go:36] "Initialized new in-memory state store"
Oct 31 14:03:36.971487 kubelet[2763]: I1031 14:03:36.971466 2763 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Oct 31 14:03:36.971487 kubelet[2763]: I1031 14:03:36.971476 2763 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Oct 31 14:03:36.971542 kubelet[2763]: I1031 14:03:36.971494 2763 policy_none.go:49] "None policy: Start"
Oct 31 14:03:36.971542 kubelet[2763]: I1031 14:03:36.971505 2763 memory_manager.go:187] "Starting memorymanager" policy="None"
Oct 31 14:03:36.971542 kubelet[2763]: I1031 14:03:36.971517 2763 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Oct 31 14:03:36.971634 kubelet[2763]: I1031 14:03:36.971615 2763 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Oct 31 14:03:36.971634 kubelet[2763]: I1031 14:03:36.971629 2763 policy_none.go:47] "Start"
Oct 31 14:03:36.976121 kubelet[2763]: E1031 14:03:36.976083 2763 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Oct 31 14:03:36.976310 kubelet[2763]: I1031 14:03:36.976271 2763 eviction_manager.go:189] "Eviction manager: starting control loop"
Oct 31 14:03:36.976310 kubelet[2763]: I1031 14:03:36.976290 2763 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Oct 31 14:03:36.976649 kubelet[2763]: I1031 14:03:36.976623 2763 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Oct 31 14:03:36.977503 kubelet[2763]: E1031 14:03:36.977483 2763 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Oct 31 14:03:37.046392 kubelet[2763]: I1031 14:03:37.046312 2763 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Oct 31 14:03:37.046645 kubelet[2763]: I1031 14:03:37.046530 2763 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Oct 31 14:03:37.046908 kubelet[2763]: I1031 14:03:37.046873 2763 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Oct 31 14:03:37.070936 kubelet[2763]: E1031 14:03:37.070895 2763 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Oct 31 14:03:37.086229 kubelet[2763]: I1031 14:03:37.086206 2763 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Oct 31 14:03:37.094136 kubelet[2763]: I1031 14:03:37.093282 2763 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Oct 31 14:03:37.094136 kubelet[2763]: I1031 14:03:37.093360 2763 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Oct 31 14:03:37.225236 kubelet[2763]: I1031 14:03:37.225188 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/239924950a1a68124e5c2e21e36567a0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"239924950a1a68124e5c2e21e36567a0\") " pod="kube-system/kube-apiserver-localhost"
Oct 31 14:03:37.225236 kubelet[2763]: I1031 14:03:37.225230 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/239924950a1a68124e5c2e21e36567a0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"239924950a1a68124e5c2e21e36567a0\") " pod="kube-system/kube-apiserver-localhost"
Oct 31 14:03:37.225236 kubelet[2763]: I1031 14:03:37.225258 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/239924950a1a68124e5c2e21e36567a0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"239924950a1a68124e5c2e21e36567a0\") " pod="kube-system/kube-apiserver-localhost"
Oct 31 14:03:37.225499 kubelet[2763]: I1031 14:03:37.225279 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Oct 31 14:03:37.225499 kubelet[2763]: I1031 14:03:37.225295 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Oct 31 14:03:37.225499 kubelet[2763]: I1031 14:03:37.225312 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Oct 31 14:03:37.225499 kubelet[2763]: I1031 14:03:37.225334 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost"
Oct 31 14:03:37.225499 kubelet[2763]: I1031 14:03:37.225351 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Oct 31 14:03:37.225666 kubelet[2763]: I1031 14:03:37.225408 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost"
Oct 31 14:03:37.371600 kubelet[2763]: E1031 14:03:37.371462 2763 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 14:03:37.371600 kubelet[2763]: E1031 14:03:37.371494 2763 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 14:03:37.371600 kubelet[2763]: E1031 14:03:37.371462 2763 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 14:03:37.913531 kubelet[2763]: I1031 14:03:37.913300 2763 apiserver.go:52] "Watching apiserver"
Oct 31 14:03:37.924305 kubelet[2763]: I1031 14:03:37.924264 2763 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Oct 31 14:03:37.954256 kubelet[2763]: I1031 14:03:37.954106 2763 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Oct 31 14:03:37.954256 kubelet[2763]: I1031 14:03:37.954176 2763 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Oct 31 14:03:37.954487 kubelet[2763]: E1031 14:03:37.954460 2763 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 14:03:38.151001 kubelet[2763]: E1031 14:03:38.148986 2763 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Oct 31 14:03:38.151001 kubelet[2763]: E1031 14:03:38.149169 2763 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Oct 31 14:03:38.151001 kubelet[2763]: E1031 14:03:38.149302 2763 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 14:03:38.151001 kubelet[2763]: E1031 14:03:38.149400 2763 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 14:03:38.164420 kubelet[2763]: I1031 14:03:38.163696 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.1636670470000001 podStartE2EDuration="1.163667047s" podCreationTimestamp="2025-10-31 14:03:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 14:03:38.149539307 +0000 UTC m=+1.302144765" watchObservedRunningTime="2025-10-31 14:03:38.163667047 +0000 UTC m=+1.316272515"
Oct 31 14:03:38.166086 kubelet[2763]: I1031 14:03:38.165891 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.165876292 podStartE2EDuration="1.165876292s" podCreationTimestamp="2025-10-31 14:03:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 14:03:38.164225748 +0000 UTC m=+1.316831216" watchObservedRunningTime="2025-10-31 14:03:38.165876292 +0000 UTC m=+1.318481790"
Oct 31 14:03:38.193167 kubelet[2763]: I1031 14:03:38.193094 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.193073345 podStartE2EDuration="3.193073345s" podCreationTimestamp="2025-10-31 14:03:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 14:03:38.176273058 +0000 UTC m=+1.328878526" watchObservedRunningTime="2025-10-31 14:03:38.193073345 +0000 UTC m=+1.345678813"
Oct 31 14:03:38.957779 kubelet[2763]: E1031 14:03:38.957540 2763 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 14:03:38.958872 kubelet[2763]: E1031 14:03:38.957085 2763 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 14:03:39.958263 kubelet[2763]: E1031 14:03:39.958213 2763 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 14:03:42.223552 kubelet[2763]: I1031 14:03:42.223507 2763 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Oct 31 14:03:42.224079 kubelet[2763]: I1031 14:03:42.223970 2763 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Oct 31 14:03:42.224125 containerd[1613]: time="2025-10-31T14:03:42.223805711Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Oct 31 14:03:42.247817 kubelet[2763]: E1031 14:03:42.247764 2763 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 14:03:42.963096 kubelet[2763]: E1031 14:03:42.963056 2763 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 14:03:43.068090 systemd[1]: Created slice kubepods-besteffort-pod0aa562c1_d08a_471a_9a18_9555c590fca3.slice - libcontainer container kubepods-besteffort-pod0aa562c1_d08a_471a_9a18_9555c590fca3.slice.
Oct 31 14:03:43.164470 kubelet[2763]: I1031 14:03:43.164410 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0aa562c1-d08a-471a-9a18-9555c590fca3-xtables-lock\") pod \"kube-proxy-p65h9\" (UID: \"0aa562c1-d08a-471a-9a18-9555c590fca3\") " pod="kube-system/kube-proxy-p65h9"
Oct 31 14:03:43.164470 kubelet[2763]: I1031 14:03:43.164452 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0aa562c1-d08a-471a-9a18-9555c590fca3-lib-modules\") pod \"kube-proxy-p65h9\" (UID: \"0aa562c1-d08a-471a-9a18-9555c590fca3\") " pod="kube-system/kube-proxy-p65h9"
Oct 31 14:03:43.164470 kubelet[2763]: I1031 14:03:43.164474 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxjpb\" (UniqueName: \"kubernetes.io/projected/0aa562c1-d08a-471a-9a18-9555c590fca3-kube-api-access-qxjpb\") pod \"kube-proxy-p65h9\" (UID: \"0aa562c1-d08a-471a-9a18-9555c590fca3\") " pod="kube-system/kube-proxy-p65h9"
Oct 31 14:03:43.164713 kubelet[2763]: I1031 14:03:43.164491 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0aa562c1-d08a-471a-9a18-9555c590fca3-kube-proxy\") pod \"kube-proxy-p65h9\" (UID: \"0aa562c1-d08a-471a-9a18-9555c590fca3\") " pod="kube-system/kube-proxy-p65h9"
Oct 31 14:03:43.298278 kubelet[2763]: E1031 14:03:43.298135 2763 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 14:03:43.394476 kubelet[2763]: E1031 14:03:43.394422 2763 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 14:03:43.395184 containerd[1613]: time="2025-10-31T14:03:43.395142574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p65h9,Uid:0aa562c1-d08a-471a-9a18-9555c590fca3,Namespace:kube-system,Attempt:0,}"
Oct 31 14:03:43.418467 containerd[1613]: time="2025-10-31T14:03:43.418409793Z" level=info msg="connecting to shim 3c82f39258ef1e86634f716cd1b65007864cff91f2a30c5bb3334780b89c65ab" address="unix:///run/containerd/s/bca2338d0a3766c98529e559d0f6ad4bffdcc6708fba3524d892f2502dfc6eac" namespace=k8s.io protocol=ttrpc version=3
Oct 31 14:03:43.444030 systemd[1]: Started cri-containerd-3c82f39258ef1e86634f716cd1b65007864cff91f2a30c5bb3334780b89c65ab.scope - libcontainer container 3c82f39258ef1e86634f716cd1b65007864cff91f2a30c5bb3334780b89c65ab.
Oct 31 14:03:43.479634 systemd[1]: Created slice kubepods-besteffort-podff2a5bbb_73ec_44fe_bc12_fa06750ec8e0.slice - libcontainer container kubepods-besteffort-podff2a5bbb_73ec_44fe_bc12_fa06750ec8e0.slice.
Oct 31 14:03:43.528931 containerd[1613]: time="2025-10-31T14:03:43.528887373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p65h9,Uid:0aa562c1-d08a-471a-9a18-9555c590fca3,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c82f39258ef1e86634f716cd1b65007864cff91f2a30c5bb3334780b89c65ab\""
Oct 31 14:03:43.529742 kubelet[2763]: E1031 14:03:43.529704 2763 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 14:03:43.539591 containerd[1613]: time="2025-10-31T14:03:43.539545946Z" level=info msg="CreateContainer within sandbox \"3c82f39258ef1e86634f716cd1b65007864cff91f2a30c5bb3334780b89c65ab\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Oct 31 14:03:43.549641 containerd[1613]: time="2025-10-31T14:03:43.549557436Z" level=info msg="Container a9366ea25de6e8aceeaf43b5f8eb858d87d811467b3ab5953a8bfb0e818f1b27: CDI devices from CRI Config.CDIDevices: []"
Oct 31 14:03:43.553690 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2571557393.mount: Deactivated successfully.
Oct 31 14:03:43.559922 containerd[1613]: time="2025-10-31T14:03:43.559882425Z" level=info msg="CreateContainer within sandbox \"3c82f39258ef1e86634f716cd1b65007864cff91f2a30c5bb3334780b89c65ab\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a9366ea25de6e8aceeaf43b5f8eb858d87d811467b3ab5953a8bfb0e818f1b27\""
Oct 31 14:03:43.560339 containerd[1613]: time="2025-10-31T14:03:43.560259786Z" level=info msg="StartContainer for \"a9366ea25de6e8aceeaf43b5f8eb858d87d811467b3ab5953a8bfb0e818f1b27\""
Oct 31 14:03:43.561714 containerd[1613]: time="2025-10-31T14:03:43.561683664Z" level=info msg="connecting to shim a9366ea25de6e8aceeaf43b5f8eb858d87d811467b3ab5953a8bfb0e818f1b27" address="unix:///run/containerd/s/bca2338d0a3766c98529e559d0f6ad4bffdcc6708fba3524d892f2502dfc6eac" protocol=ttrpc version=3
Oct 31 14:03:43.567334 kubelet[2763]: I1031 14:03:43.567256 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ff2a5bbb-73ec-44fe-bc12-fa06750ec8e0-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-lr6th\" (UID: \"ff2a5bbb-73ec-44fe-bc12-fa06750ec8e0\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-lr6th"
Oct 31 14:03:43.567334 kubelet[2763]: I1031 14:03:43.567298 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wc8w\" (UniqueName: \"kubernetes.io/projected/ff2a5bbb-73ec-44fe-bc12-fa06750ec8e0-kube-api-access-7wc8w\") pod \"tigera-operator-65cdcdfd6d-lr6th\" (UID: \"ff2a5bbb-73ec-44fe-bc12-fa06750ec8e0\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-lr6th"
Oct 31 14:03:43.592024 systemd[1]: Started cri-containerd-a9366ea25de6e8aceeaf43b5f8eb858d87d811467b3ab5953a8bfb0e818f1b27.scope - libcontainer container a9366ea25de6e8aceeaf43b5f8eb858d87d811467b3ab5953a8bfb0e818f1b27.
Oct 31 14:03:43.640130 containerd[1613]: time="2025-10-31T14:03:43.640075818Z" level=info msg="StartContainer for \"a9366ea25de6e8aceeaf43b5f8eb858d87d811467b3ab5953a8bfb0e818f1b27\" returns successfully"
Oct 31 14:03:43.786676 containerd[1613]: time="2025-10-31T14:03:43.786626040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-lr6th,Uid:ff2a5bbb-73ec-44fe-bc12-fa06750ec8e0,Namespace:tigera-operator,Attempt:0,}"
Oct 31 14:03:43.810061 containerd[1613]: time="2025-10-31T14:03:43.809634090Z" level=info msg="connecting to shim 1dfec357407edf53e16e3bdfbaaa440ff3f610d80851aa7ada7635d30d45f7a5" address="unix:///run/containerd/s/9406f540d1e869a38f3a3057ccb98d2ddae5966a307ac39d1da84c9094027056" namespace=k8s.io protocol=ttrpc version=3
Oct 31 14:03:43.835024 systemd[1]: Started cri-containerd-1dfec357407edf53e16e3bdfbaaa440ff3f610d80851aa7ada7635d30d45f7a5.scope - libcontainer container 1dfec357407edf53e16e3bdfbaaa440ff3f610d80851aa7ada7635d30d45f7a5.
Oct 31 14:03:43.921740 containerd[1613]: time="2025-10-31T14:03:43.921692818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-lr6th,Uid:ff2a5bbb-73ec-44fe-bc12-fa06750ec8e0,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1dfec357407edf53e16e3bdfbaaa440ff3f610d80851aa7ada7635d30d45f7a5\""
Oct 31 14:03:43.923669 containerd[1613]: time="2025-10-31T14:03:43.923643060Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Oct 31 14:03:43.976103 kubelet[2763]: E1031 14:03:43.975741 2763 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 14:03:43.976261 kubelet[2763]: E1031 14:03:43.976117 2763 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 14:03:43.976516 kubelet[2763]: E1031 14:03:43.976483 2763 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 14:03:45.146443 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4227386217.mount: Deactivated successfully.
Oct 31 14:03:45.864744 containerd[1613]: time="2025-10-31T14:03:45.864657153Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 14:03:45.865760 containerd[1613]: time="2025-10-31T14:03:45.865404654Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691"
Oct 31 14:03:45.866907 containerd[1613]: time="2025-10-31T14:03:45.866876727Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 14:03:45.869541 containerd[1613]: time="2025-10-31T14:03:45.869482594Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Oct 31 14:03:45.870194 containerd[1613]: time="2025-10-31T14:03:45.870149247Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.946478325s"
Oct 31 14:03:45.870194 containerd[1613]: time="2025-10-31T14:03:45.870181538Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\""
Oct 31 14:03:45.875657 containerd[1613]: time="2025-10-31T14:03:45.875174080Z" level=info msg="CreateContainer within sandbox \"1dfec357407edf53e16e3bdfbaaa440ff3f610d80851aa7ada7635d30d45f7a5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Oct 31 14:03:45.884152 containerd[1613]: time="2025-10-31T14:03:45.884121008Z" level=info msg="Container d999baa20cbff8d675b315cc39f26a8d604ccb02c46c5d2f9416c020d442c309: CDI devices from CRI Config.CDIDevices: []"
Oct 31 14:03:45.892711 containerd[1613]: time="2025-10-31T14:03:45.892637766Z" level=info msg="CreateContainer within sandbox \"1dfec357407edf53e16e3bdfbaaa440ff3f610d80851aa7ada7635d30d45f7a5\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d999baa20cbff8d675b315cc39f26a8d604ccb02c46c5d2f9416c020d442c309\""
Oct 31 14:03:45.893393 containerd[1613]: time="2025-10-31T14:03:45.893331649Z" level=info msg="StartContainer for \"d999baa20cbff8d675b315cc39f26a8d604ccb02c46c5d2f9416c020d442c309\""
Oct 31 14:03:45.894368 containerd[1613]: time="2025-10-31T14:03:45.894343535Z" level=info msg="connecting to shim d999baa20cbff8d675b315cc39f26a8d604ccb02c46c5d2f9416c020d442c309" address="unix:///run/containerd/s/9406f540d1e869a38f3a3057ccb98d2ddae5966a307ac39d1da84c9094027056" protocol=ttrpc version=3
Oct 31 14:03:45.928254 systemd[1]: Started cri-containerd-d999baa20cbff8d675b315cc39f26a8d604ccb02c46c5d2f9416c020d442c309.scope - libcontainer container d999baa20cbff8d675b315cc39f26a8d604ccb02c46c5d2f9416c020d442c309.
Oct 31 14:03:45.967701 containerd[1613]: time="2025-10-31T14:03:45.967623787Z" level=info msg="StartContainer for \"d999baa20cbff8d675b315cc39f26a8d604ccb02c46c5d2f9416c020d442c309\" returns successfully"
Oct 31 14:03:45.992229 kubelet[2763]: I1031 14:03:45.992163 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-p65h9" podStartSLOduration=2.992141109 podStartE2EDuration="2.992141109s" podCreationTimestamp="2025-10-31 14:03:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 14:03:43.99545883 +0000 UTC m=+7.148064308" watchObservedRunningTime="2025-10-31 14:03:45.992141109 +0000 UTC m=+9.144746577"
Oct 31 14:03:46.142956 kubelet[2763]: E1031 14:03:46.142757 2763 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 14:03:46.158418 kubelet[2763]: I1031 14:03:46.158331 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-lr6th" podStartSLOduration=1.210223267 podStartE2EDuration="3.15830727s" podCreationTimestamp="2025-10-31 14:03:43 +0000 UTC" firstStartedPulling="2025-10-31 14:03:43.923260576 +0000 UTC m=+7.075866034" lastFinishedPulling="2025-10-31 14:03:45.871344569 +0000 UTC m=+9.023950037" observedRunningTime="2025-10-31 14:03:45.992841617 +0000 UTC m=+9.145447085" watchObservedRunningTime="2025-10-31 14:03:46.15830727 +0000 UTC m=+9.310912738"
Oct 31 14:03:46.983451 kubelet[2763]: E1031 14:03:46.983404 2763 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 14:03:47.988462 kubelet[2763]: E1031 14:03:47.988410 2763 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 14:03:51.390380 sudo[1832]: pam_unix(sudo:session): session closed for user root
Oct 31 14:03:51.394681 sshd[1831]: Connection closed by 10.0.0.1 port 41774
Oct 31 14:03:51.394193 sshd-session[1828]: pam_unix(sshd:session): session closed for user core
Oct 31 14:03:51.400984 systemd[1]: sshd@6-10.0.0.39:22-10.0.0.1:41774.service: Deactivated successfully.
Oct 31 14:03:51.406295 systemd[1]: session-7.scope: Deactivated successfully.
Oct 31 14:03:51.406540 systemd[1]: session-7.scope: Consumed 6.143s CPU time, 224.2M memory peak.
Oct 31 14:03:51.411493 systemd-logind[1600]: Session 7 logged out. Waiting for processes to exit.
Oct 31 14:03:51.413130 systemd-logind[1600]: Removed session 7.
Oct 31 14:03:51.456800 update_engine[1602]: I20251031 14:03:51.455929 1602 update_attempter.cc:509] Updating boot flags...
Oct 31 14:03:55.623612 systemd[1]: Created slice kubepods-besteffort-podb9825d4b_d1d1_4460_a4c9_3d87f828fbcf.slice - libcontainer container kubepods-besteffort-podb9825d4b_d1d1_4460_a4c9_3d87f828fbcf.slice.
Oct 31 14:03:55.651806 kubelet[2763]: I1031 14:03:55.651699 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54gc6\" (UniqueName: \"kubernetes.io/projected/b9825d4b-d1d1-4460-a4c9-3d87f828fbcf-kube-api-access-54gc6\") pod \"calico-typha-5cb6664548-zqpbb\" (UID: \"b9825d4b-d1d1-4460-a4c9-3d87f828fbcf\") " pod="calico-system/calico-typha-5cb6664548-zqpbb"
Oct 31 14:03:55.651806 kubelet[2763]: I1031 14:03:55.651778 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9825d4b-d1d1-4460-a4c9-3d87f828fbcf-tigera-ca-bundle\") pod \"calico-typha-5cb6664548-zqpbb\" (UID: \"b9825d4b-d1d1-4460-a4c9-3d87f828fbcf\") " pod="calico-system/calico-typha-5cb6664548-zqpbb"
Oct 31 14:03:55.651806 kubelet[2763]: I1031 14:03:55.651803 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b9825d4b-d1d1-4460-a4c9-3d87f828fbcf-typha-certs\") pod \"calico-typha-5cb6664548-zqpbb\" (UID: \"b9825d4b-d1d1-4460-a4c9-3d87f828fbcf\") " pod="calico-system/calico-typha-5cb6664548-zqpbb"
Oct 31 14:03:55.926802 systemd[1]: Created slice kubepods-besteffort-pod8eebfe46_ba21_446c_9b46_6a5d8c122bde.slice - libcontainer container kubepods-besteffort-pod8eebfe46_ba21_446c_9b46_6a5d8c122bde.slice.
Oct 31 14:03:55.932432 kubelet[2763]: E1031 14:03:55.931874 2763 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 14:03:55.932803 containerd[1613]: time="2025-10-31T14:03:55.932766452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5cb6664548-zqpbb,Uid:b9825d4b-d1d1-4460-a4c9-3d87f828fbcf,Namespace:calico-system,Attempt:0,}"
Oct 31 14:03:55.954292 kubelet[2763]: I1031 14:03:55.954043 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8eebfe46-ba21-446c-9b46-6a5d8c122bde-cni-net-dir\") pod \"calico-node-vz57s\" (UID: \"8eebfe46-ba21-446c-9b46-6a5d8c122bde\") " pod="calico-system/calico-node-vz57s"
Oct 31 14:03:55.954292 kubelet[2763]: I1031 14:03:55.954114 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8eebfe46-ba21-446c-9b46-6a5d8c122bde-policysync\") pod \"calico-node-vz57s\" (UID: \"8eebfe46-ba21-446c-9b46-6a5d8c122bde\") " pod="calico-system/calico-node-vz57s"
Oct 31 14:03:55.954292 kubelet[2763]: I1031 14:03:55.954133 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8eebfe46-ba21-446c-9b46-6a5d8c122bde-var-run-calico\") pod \"calico-node-vz57s\" (UID: \"8eebfe46-ba21-446c-9b46-6a5d8c122bde\") " pod="calico-system/calico-node-vz57s"
Oct 31 14:03:55.954292 kubelet[2763]: I1031 14:03:55.954164 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcgrb\" (UniqueName: \"kubernetes.io/projected/8eebfe46-ba21-446c-9b46-6a5d8c122bde-kube-api-access-tcgrb\") pod \"calico-node-vz57s\" (UID: \"8eebfe46-ba21-446c-9b46-6a5d8c122bde\") " pod="calico-system/calico-node-vz57s"
Oct 31 14:03:55.954292 kubelet[2763]: I1031 14:03:55.954190 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8eebfe46-ba21-446c-9b46-6a5d8c122bde-tigera-ca-bundle\") pod \"calico-node-vz57s\" (UID: \"8eebfe46-ba21-446c-9b46-6a5d8c122bde\") " pod="calico-system/calico-node-vz57s"
Oct 31 14:03:55.954523 kubelet[2763]: I1031 14:03:55.954212 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8eebfe46-ba21-446c-9b46-6a5d8c122bde-cni-log-dir\") pod \"calico-node-vz57s\" (UID: \"8eebfe46-ba21-446c-9b46-6a5d8c122bde\") " pod="calico-system/calico-node-vz57s"
Oct 31 14:03:55.954523 kubelet[2763]: I1031 14:03:55.954245 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8eebfe46-ba21-446c-9b46-6a5d8c122bde-lib-modules\") pod \"calico-node-vz57s\" (UID: \"8eebfe46-ba21-446c-9b46-6a5d8c122bde\") " pod="calico-system/calico-node-vz57s"
Oct 31 14:03:55.954523 kubelet[2763]: I1031 14:03:55.954284 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8eebfe46-ba21-446c-9b46-6a5d8c122bde-var-lib-calico\") pod \"calico-node-vz57s\" (UID: \"8eebfe46-ba21-446c-9b46-6a5d8c122bde\") " pod="calico-system/calico-node-vz57s"
Oct 31 14:03:55.954523 kubelet[2763]: I1031 14:03:55.954313 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8eebfe46-ba21-446c-9b46-6a5d8c122bde-xtables-lock\") pod \"calico-node-vz57s\" (UID: \"8eebfe46-ba21-446c-9b46-6a5d8c122bde\") " pod="calico-system/calico-node-vz57s"
Oct 31 14:03:55.954523 kubelet[2763]: I1031 14:03:55.954339 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8eebfe46-ba21-446c-9b46-6a5d8c122bde-flexvol-driver-host\") pod \"calico-node-vz57s\" (UID: \"8eebfe46-ba21-446c-9b46-6a5d8c122bde\") " pod="calico-system/calico-node-vz57s"
Oct 31 14:03:55.954636 kubelet[2763]: I1031 14:03:55.954359 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8eebfe46-ba21-446c-9b46-6a5d8c122bde-node-certs\") pod \"calico-node-vz57s\" (UID: \"8eebfe46-ba21-446c-9b46-6a5d8c122bde\") " pod="calico-system/calico-node-vz57s"
Oct 31 14:03:55.954636 kubelet[2763]: I1031 14:03:55.954376 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8eebfe46-ba21-446c-9b46-6a5d8c122bde-cni-bin-dir\") pod \"calico-node-vz57s\" (UID: \"8eebfe46-ba21-446c-9b46-6a5d8c122bde\") " pod="calico-system/calico-node-vz57s"
Oct 31 14:03:55.980683 containerd[1613]: time="2025-10-31T14:03:55.980599381Z" level=info msg="connecting to shim db3a9460f5ed6cce385d00efffd61dbabd26a52e11d084abd901716faf930eeb" address="unix:///run/containerd/s/c937e0eeb5a5ff5db4fcb65982867945424f46cc5a8a76b72586bf2537c074aa" namespace=k8s.io protocol=ttrpc version=3
Oct 31 14:03:56.011070 systemd[1]: Started cri-containerd-db3a9460f5ed6cce385d00efffd61dbabd26a52e11d084abd901716faf930eeb.scope - libcontainer container db3a9460f5ed6cce385d00efffd61dbabd26a52e11d084abd901716faf930eeb.
Oct 31 14:03:56.049904 kubelet[2763]: E1031 14:03:56.049666 2763 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rpkmr" podUID="d72fcf62-30d2-4a4d-9feb-16a72bc97e14" Oct 31 14:03:56.064873 kubelet[2763]: E1031 14:03:56.061994 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:03:56.064873 kubelet[2763]: W1031 14:03:56.062029 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:03:56.064873 kubelet[2763]: E1031 14:03:56.062069 2763 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:03:56.071433 kubelet[2763]: E1031 14:03:56.071387 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:03:56.071433 kubelet[2763]: W1031 14:03:56.071416 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:03:56.071596 kubelet[2763]: E1031 14:03:56.071442 2763 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 14:03:56.074163 kubelet[2763]: E1031 14:03:56.074131 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:03:56.074163 kubelet[2763]: W1031 14:03:56.074155 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:03:56.074296 kubelet[2763]: E1031 14:03:56.074176 2763 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:03:56.089024 containerd[1613]: time="2025-10-31T14:03:56.088977134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5cb6664548-zqpbb,Uid:b9825d4b-d1d1-4460-a4c9-3d87f828fbcf,Namespace:calico-system,Attempt:0,} returns sandbox id \"db3a9460f5ed6cce385d00efffd61dbabd26a52e11d084abd901716faf930eeb\"" Oct 31 14:03:56.089972 kubelet[2763]: E1031 14:03:56.089898 2763 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 14:03:56.091031 containerd[1613]: time="2025-10-31T14:03:56.090988761Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Oct 31 14:03:56.132946 kubelet[2763]: E1031 14:03:56.132898 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:03:56.132946 kubelet[2763]: W1031 14:03:56.132929 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:03:56.132946 kubelet[2763]: E1031 14:03:56.132955 2763 plugins.go:697] "Error dynamically probing plugins" 
err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:03:56.133230 kubelet[2763]: E1031 14:03:56.133172 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:03:56.133230 kubelet[2763]: W1031 14:03:56.133185 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:03:56.133230 kubelet[2763]: E1031 14:03:56.133199 2763 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:03:56.134084 kubelet[2763]: E1031 14:03:56.133385 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:03:56.134084 kubelet[2763]: W1031 14:03:56.133396 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:03:56.134084 kubelet[2763]: E1031 14:03:56.133405 2763 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 14:03:56.134084 kubelet[2763]: E1031 14:03:56.133605 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:03:56.134084 kubelet[2763]: W1031 14:03:56.133615 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:03:56.134084 kubelet[2763]: E1031 14:03:56.133624 2763 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:03:56.134084 kubelet[2763]: E1031 14:03:56.133807 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:03:56.134084 kubelet[2763]: W1031 14:03:56.133818 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:03:56.134084 kubelet[2763]: E1031 14:03:56.133826 2763 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 14:03:56.134084 kubelet[2763]: E1031 14:03:56.134059 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:03:56.134349 kubelet[2763]: W1031 14:03:56.134069 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:03:56.134349 kubelet[2763]: E1031 14:03:56.134079 2763 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:03:56.134349 kubelet[2763]: E1031 14:03:56.134243 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:03:56.134349 kubelet[2763]: W1031 14:03:56.134250 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:03:56.134349 kubelet[2763]: E1031 14:03:56.134271 2763 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 14:03:56.134457 kubelet[2763]: E1031 14:03:56.134441 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:03:56.134457 kubelet[2763]: W1031 14:03:56.134448 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:03:56.134457 kubelet[2763]: E1031 14:03:56.134456 2763 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:03:56.135140 kubelet[2763]: E1031 14:03:56.134665 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:03:56.135140 kubelet[2763]: W1031 14:03:56.134679 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:03:56.135140 kubelet[2763]: E1031 14:03:56.134692 2763 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 14:03:56.135140 kubelet[2763]: E1031 14:03:56.134912 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:03:56.135140 kubelet[2763]: W1031 14:03:56.134923 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:03:56.135140 kubelet[2763]: E1031 14:03:56.134934 2763 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:03:56.136159 kubelet[2763]: E1031 14:03:56.136096 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:03:56.136159 kubelet[2763]: W1031 14:03:56.136111 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:03:56.136159 kubelet[2763]: E1031 14:03:56.136121 2763 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 14:03:56.136608 kubelet[2763]: E1031 14:03:56.136445 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:03:56.136608 kubelet[2763]: W1031 14:03:56.136455 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:03:56.136608 kubelet[2763]: E1031 14:03:56.136467 2763 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:03:56.136710 kubelet[2763]: E1031 14:03:56.136662 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:03:56.136710 kubelet[2763]: W1031 14:03:56.136669 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:03:56.136710 kubelet[2763]: E1031 14:03:56.136678 2763 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 14:03:56.136990 kubelet[2763]: E1031 14:03:56.136967 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:03:56.136990 kubelet[2763]: W1031 14:03:56.136981 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:03:56.136990 kubelet[2763]: E1031 14:03:56.136991 2763 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:03:56.137188 kubelet[2763]: E1031 14:03:56.137169 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:03:56.137188 kubelet[2763]: W1031 14:03:56.137181 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:03:56.137188 kubelet[2763]: E1031 14:03:56.137190 2763 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 14:03:56.137380 kubelet[2763]: E1031 14:03:56.137363 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:03:56.137380 kubelet[2763]: W1031 14:03:56.137375 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:03:56.137497 kubelet[2763]: E1031 14:03:56.137385 2763 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:03:56.139897 kubelet[2763]: E1031 14:03:56.139338 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:03:56.139897 kubelet[2763]: W1031 14:03:56.139360 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:03:56.139897 kubelet[2763]: E1031 14:03:56.139375 2763 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 14:03:56.139897 kubelet[2763]: E1031 14:03:56.139555 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:03:56.139897 kubelet[2763]: W1031 14:03:56.139564 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:03:56.139897 kubelet[2763]: E1031 14:03:56.139573 2763 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:03:56.139897 kubelet[2763]: E1031 14:03:56.139735 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:03:56.139897 kubelet[2763]: W1031 14:03:56.139742 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:03:56.139897 kubelet[2763]: E1031 14:03:56.139754 2763 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 14:03:56.140154 kubelet[2763]: E1031 14:03:56.139951 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:03:56.140154 kubelet[2763]: W1031 14:03:56.139959 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:03:56.140154 kubelet[2763]: E1031 14:03:56.139973 2763 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:03:56.155875 kubelet[2763]: E1031 14:03:56.155813 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:03:56.155875 kubelet[2763]: W1031 14:03:56.155843 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:03:56.155875 kubelet[2763]: E1031 14:03:56.155879 2763 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 14:03:56.156088 kubelet[2763]: I1031 14:03:56.155906 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d72fcf62-30d2-4a4d-9feb-16a72bc97e14-kubelet-dir\") pod \"csi-node-driver-rpkmr\" (UID: \"d72fcf62-30d2-4a4d-9feb-16a72bc97e14\") " pod="calico-system/csi-node-driver-rpkmr" Oct 31 14:03:56.156126 kubelet[2763]: E1031 14:03:56.156114 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:03:56.156162 kubelet[2763]: W1031 14:03:56.156124 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:03:56.156162 kubelet[2763]: E1031 14:03:56.156152 2763 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 14:03:56.156205 kubelet[2763]: I1031 14:03:56.156178 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d72fcf62-30d2-4a4d-9feb-16a72bc97e14-socket-dir\") pod \"csi-node-driver-rpkmr\" (UID: \"d72fcf62-30d2-4a4d-9feb-16a72bc97e14\") " pod="calico-system/csi-node-driver-rpkmr" Oct 31 14:03:56.156894 kubelet[2763]: E1031 14:03:56.156456 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:03:56.156894 kubelet[2763]: W1031 14:03:56.156470 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:03:56.156894 kubelet[2763]: E1031 14:03:56.156480 2763 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 14:03:56.156894 kubelet[2763]: I1031 14:03:56.156505 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d72fcf62-30d2-4a4d-9feb-16a72bc97e14-varrun\") pod \"csi-node-driver-rpkmr\" (UID: \"d72fcf62-30d2-4a4d-9feb-16a72bc97e14\") " pod="calico-system/csi-node-driver-rpkmr" Oct 31 14:03:56.156894 kubelet[2763]: E1031 14:03:56.156721 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:03:56.156894 kubelet[2763]: W1031 14:03:56.156730 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:03:56.156894 kubelet[2763]: E1031 14:03:56.156739 2763 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 14:03:56.156894 kubelet[2763]: I1031 14:03:56.156766 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8bpp\" (UniqueName: \"kubernetes.io/projected/d72fcf62-30d2-4a4d-9feb-16a72bc97e14-kube-api-access-s8bpp\") pod \"csi-node-driver-rpkmr\" (UID: \"d72fcf62-30d2-4a4d-9feb-16a72bc97e14\") " pod="calico-system/csi-node-driver-rpkmr" Oct 31 14:03:56.157099 kubelet[2763]: E1031 14:03:56.157072 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:03:56.157099 kubelet[2763]: W1031 14:03:56.157081 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:03:56.157099 kubelet[2763]: E1031 14:03:56.157090 2763 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 14:03:56.157166 kubelet[2763]: I1031 14:03:56.157118 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d72fcf62-30d2-4a4d-9feb-16a72bc97e14-registration-dir\") pod \"csi-node-driver-rpkmr\" (UID: \"d72fcf62-30d2-4a4d-9feb-16a72bc97e14\") " pod="calico-system/csi-node-driver-rpkmr" Oct 31 14:03:56.157372 kubelet[2763]: E1031 14:03:56.157353 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:03:56.157372 kubelet[2763]: W1031 14:03:56.157368 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:03:56.157503 kubelet[2763]: E1031 14:03:56.157378 2763 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:03:56.159872 kubelet[2763]: E1031 14:03:56.159026 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:03:56.159872 kubelet[2763]: W1031 14:03:56.159041 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:03:56.159872 kubelet[2763]: E1031 14:03:56.159051 2763 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 14:03:56.159872 kubelet[2763]: E1031 14:03:56.159244 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:03:56.159872 kubelet[2763]: W1031 14:03:56.159252 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:03:56.159872 kubelet[2763]: E1031 14:03:56.159270 2763 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:03:56.159872 kubelet[2763]: E1031 14:03:56.159491 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:03:56.159872 kubelet[2763]: W1031 14:03:56.159499 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:03:56.159872 kubelet[2763]: E1031 14:03:56.159508 2763 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 14:03:56.159872 kubelet[2763]: E1031 14:03:56.159711 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:03:56.160195 kubelet[2763]: W1031 14:03:56.159719 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:03:56.160195 kubelet[2763]: E1031 14:03:56.159728 2763 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:03:56.160195 kubelet[2763]: E1031 14:03:56.159962 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:03:56.160195 kubelet[2763]: W1031 14:03:56.159970 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:03:56.160195 kubelet[2763]: E1031 14:03:56.159979 2763 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 14:03:56.160389 kubelet[2763]: E1031 14:03:56.160371 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:03:56.160389 kubelet[2763]: W1031 14:03:56.160383 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:03:56.160445 kubelet[2763]: E1031 14:03:56.160392 2763 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:03:56.161909 kubelet[2763]: E1031 14:03:56.161887 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:03:56.161909 kubelet[2763]: W1031 14:03:56.161904 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:03:56.161909 kubelet[2763]: E1031 14:03:56.161914 2763 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 14:03:56.162160 kubelet[2763]: E1031 14:03:56.162144 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:03:56.162160 kubelet[2763]: W1031 14:03:56.162156 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:03:56.162224 kubelet[2763]: E1031 14:03:56.162165 2763 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Oct 31 14:03:56.234599 kubelet[2763]: E1031 14:03:56.234465 2763 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 14:03:56.234966 containerd[1613]: time="2025-10-31T14:03:56.234925164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vz57s,Uid:8eebfe46-ba21-446c-9b46-6a5d8c122bde,Namespace:calico-system,Attempt:0,}"
Oct 31 14:03:56.261260 containerd[1613]: time="2025-10-31T14:03:56.261220004Z" level=info msg="connecting to shim 90547afb5a67aeba112de1470f06abfcc3986404741066c2eee5a446a12c253e" address="unix:///run/containerd/s/0bc3b46a36af60f98607000854eb8c31a066c9c01039bcdd332644834b02273e" namespace=k8s.io protocol=ttrpc version=3
Oct 31 14:03:56.290228 systemd[1]: Started cri-containerd-90547afb5a67aeba112de1470f06abfcc3986404741066c2eee5a446a12c253e.scope - libcontainer container 90547afb5a67aeba112de1470f06abfcc3986404741066c2eee5a446a12c253e.
Oct 31 14:03:56.321321 containerd[1613]: time="2025-10-31T14:03:56.321260731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vz57s,Uid:8eebfe46-ba21-446c-9b46-6a5d8c122bde,Namespace:calico-system,Attempt:0,} returns sandbox id \"90547afb5a67aeba112de1470f06abfcc3986404741066c2eee5a446a12c253e\"" Oct 31 14:03:56.322195 kubelet[2763]: E1031 14:03:56.322159 2763 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 14:03:57.421907 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2496129026.mount: Deactivated successfully. Oct 31 14:03:57.945653 kubelet[2763]: E1031 14:03:57.945565 2763 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rpkmr" podUID="d72fcf62-30d2-4a4d-9feb-16a72bc97e14" Oct 31 14:03:57.971614 containerd[1613]: time="2025-10-31T14:03:57.971525170Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:03:57.972352 containerd[1613]: time="2025-10-31T14:03:57.972302234Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Oct 31 14:03:57.973595 containerd[1613]: time="2025-10-31T14:03:57.973555111Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:03:57.977187 containerd[1613]: time="2025-10-31T14:03:57.977148258Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag 
\"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.886113401s" Oct 31 14:03:57.977187 containerd[1613]: time="2025-10-31T14:03:57.977184756Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Oct 31 14:03:57.977486 containerd[1613]: time="2025-10-31T14:03:57.977451066Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:03:57.980260 containerd[1613]: time="2025-10-31T14:03:57.980243149Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Oct 31 14:03:58.000411 containerd[1613]: time="2025-10-31T14:03:58.000365400Z" level=info msg="CreateContainer within sandbox \"db3a9460f5ed6cce385d00efffd61dbabd26a52e11d084abd901716faf930eeb\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 31 14:03:58.007616 containerd[1613]: time="2025-10-31T14:03:58.007585548Z" level=info msg="Container 9df6b09e1f51f89187ccf9e765d4abd809b46d77444c7163e4bd768f7622d086: CDI devices from CRI Config.CDIDevices: []" Oct 31 14:03:58.015797 containerd[1613]: time="2025-10-31T14:03:58.015736998Z" level=info msg="CreateContainer within sandbox \"db3a9460f5ed6cce385d00efffd61dbabd26a52e11d084abd901716faf930eeb\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9df6b09e1f51f89187ccf9e765d4abd809b46d77444c7163e4bd768f7622d086\"" Oct 31 14:03:58.016247 containerd[1613]: time="2025-10-31T14:03:58.016205801Z" level=info msg="StartContainer for \"9df6b09e1f51f89187ccf9e765d4abd809b46d77444c7163e4bd768f7622d086\"" Oct 31 14:03:58.017696 containerd[1613]: time="2025-10-31T14:03:58.017660822Z" level=info msg="connecting to shim 
9df6b09e1f51f89187ccf9e765d4abd809b46d77444c7163e4bd768f7622d086" address="unix:///run/containerd/s/c937e0eeb5a5ff5db4fcb65982867945424f46cc5a8a76b72586bf2537c074aa" protocol=ttrpc version=3 Oct 31 14:03:58.046074 systemd[1]: Started cri-containerd-9df6b09e1f51f89187ccf9e765d4abd809b46d77444c7163e4bd768f7622d086.scope - libcontainer container 9df6b09e1f51f89187ccf9e765d4abd809b46d77444c7163e4bd768f7622d086. Oct 31 14:03:58.099502 containerd[1613]: time="2025-10-31T14:03:58.099437950Z" level=info msg="StartContainer for \"9df6b09e1f51f89187ccf9e765d4abd809b46d77444c7163e4bd768f7622d086\" returns successfully" Oct 31 14:03:59.017911 kubelet[2763]: E1031 14:03:59.017833 2763 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 14:03:59.059668 kubelet[2763]: E1031 14:03:59.059622 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:03:59.059668 kubelet[2763]: W1031 14:03:59.059648 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:03:59.059668 kubelet[2763]: E1031 14:03:59.059675 2763 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 14:03:59.060369 kubelet[2763]: E1031 14:03:59.059831 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:03:59.060369 kubelet[2763]: W1031 14:03:59.059838 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:03:59.060369 kubelet[2763]: E1031 14:03:59.059870 2763 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:03:59.060369 kubelet[2763]: E1031 14:03:59.060117 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:03:59.060369 kubelet[2763]: W1031 14:03:59.060125 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:03:59.060369 kubelet[2763]: E1031 14:03:59.060134 2763 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 14:03:59.060592 kubelet[2763]: E1031 14:03:59.060395 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:03:59.060592 kubelet[2763]: W1031 14:03:59.060404 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:03:59.060592 kubelet[2763]: E1031 14:03:59.060412 2763 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:03:59.060703 kubelet[2763]: E1031 14:03:59.060632 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:03:59.060703 kubelet[2763]: W1031 14:03:59.060640 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:03:59.060703 kubelet[2763]: E1031 14:03:59.060648 2763 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 14:03:59.060946 kubelet[2763]: E1031 14:03:59.060927 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:03:59.060946 kubelet[2763]: W1031 14:03:59.060938 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:03:59.060946 kubelet[2763]: E1031 14:03:59.060948 2763 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:03:59.061153 kubelet[2763]: E1031 14:03:59.061134 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:03:59.061153 kubelet[2763]: W1031 14:03:59.061145 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:03:59.061153 kubelet[2763]: E1031 14:03:59.061152 2763 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 14:03:59.061337 kubelet[2763]: E1031 14:03:59.061320 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:03:59.061337 kubelet[2763]: W1031 14:03:59.061332 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:03:59.061337 kubelet[2763]: E1031 14:03:59.061340 2763 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:03:59.061616 kubelet[2763]: E1031 14:03:59.061579 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:03:59.061616 kubelet[2763]: W1031 14:03:59.061607 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:03:59.061706 kubelet[2763]: E1031 14:03:59.061635 2763 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 14:03:59.061962 kubelet[2763]: E1031 14:03:59.061944 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:03:59.061962 kubelet[2763]: W1031 14:03:59.061957 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:03:59.062033 kubelet[2763]: E1031 14:03:59.061969 2763 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:03:59.062196 kubelet[2763]: E1031 14:03:59.062178 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:03:59.062196 kubelet[2763]: W1031 14:03:59.062193 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:03:59.062286 kubelet[2763]: E1031 14:03:59.062207 2763 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 14:03:59.062436 kubelet[2763]: E1031 14:03:59.062423 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:03:59.062459 kubelet[2763]: W1031 14:03:59.062434 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:03:59.062459 kubelet[2763]: E1031 14:03:59.062446 2763 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:03:59.062679 kubelet[2763]: E1031 14:03:59.062664 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:03:59.062712 kubelet[2763]: W1031 14:03:59.062678 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:03:59.062712 kubelet[2763]: E1031 14:03:59.062689 2763 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 14:03:59.062925 kubelet[2763]: E1031 14:03:59.062911 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:03:59.062925 kubelet[2763]: W1031 14:03:59.062923 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:03:59.062996 kubelet[2763]: E1031 14:03:59.062933 2763 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:03:59.063143 kubelet[2763]: E1031 14:03:59.063129 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:03:59.063176 kubelet[2763]: W1031 14:03:59.063142 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:03:59.063176 kubelet[2763]: E1031 14:03:59.063154 2763 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 14:03:59.083785 kubelet[2763]: E1031 14:03:59.083732 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:03:59.083785 kubelet[2763]: W1031 14:03:59.083767 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:03:59.083917 kubelet[2763]: E1031 14:03:59.083794 2763 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 31 14:03:59.084104 kubelet[2763]: E1031 14:03:59.084072 2763 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 31 14:03:59.084104 kubelet[2763]: W1031 14:03:59.084089 2763 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 31 14:03:59.084104 kubelet[2763]: E1031 14:03:59.084099 2763 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 31 14:03:59.129312 kubelet[2763]: I1031 14:03:59.128435 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5cb6664548-zqpbb" podStartSLOduration=2.24038706 podStartE2EDuration="4.128412607s" podCreationTimestamp="2025-10-31 14:03:55 +0000 UTC" firstStartedPulling="2025-10-31 14:03:56.090507114 +0000 UTC m=+19.243112582" lastFinishedPulling="2025-10-31 14:03:57.978532661 +0000 UTC m=+21.131138129" observedRunningTime="2025-10-31 14:03:59.128001618 +0000 UTC m=+22.280607116" watchObservedRunningTime="2025-10-31 14:03:59.128412607 +0000 UTC m=+22.281018075" Oct 31 14:03:59.247549 containerd[1613]: time="2025-10-31T14:03:59.247497620Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:03:59.248558 containerd[1613]: time="2025-10-31T14:03:59.248330801Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Oct 31 14:03:59.249570 containerd[1613]: time="2025-10-31T14:03:59.249525507Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:03:59.252014 containerd[1613]: time="2025-10-31T14:03:59.251961037Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:03:59.252467 containerd[1613]: time="2025-10-31T14:03:59.252428484Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.272089563s" Oct 31 14:03:59.252467 containerd[1613]: time="2025-10-31T14:03:59.252458496Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Oct 31 14:03:59.256577 containerd[1613]: time="2025-10-31T14:03:59.256529744Z" level=info msg="CreateContainer within sandbox \"90547afb5a67aeba112de1470f06abfcc3986404741066c2eee5a446a12c253e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 31 14:03:59.265337 containerd[1613]: time="2025-10-31T14:03:59.265305291Z" level=info msg="Container 476ce4dadd638181418fa899717a58df38b1bee5a64ead5da73ee91c65d1d631: CDI devices from CRI Config.CDIDevices: []" Oct 31 14:03:59.274510 containerd[1613]: time="2025-10-31T14:03:59.274416661Z" level=info msg="CreateContainer within sandbox \"90547afb5a67aeba112de1470f06abfcc3986404741066c2eee5a446a12c253e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"476ce4dadd638181418fa899717a58df38b1bee5a64ead5da73ee91c65d1d631\"" Oct 31 14:03:59.275012 containerd[1613]: time="2025-10-31T14:03:59.274958653Z" level=info msg="StartContainer for \"476ce4dadd638181418fa899717a58df38b1bee5a64ead5da73ee91c65d1d631\"" Oct 31 14:03:59.276526 containerd[1613]: time="2025-10-31T14:03:59.276500275Z" level=info msg="connecting to shim 476ce4dadd638181418fa899717a58df38b1bee5a64ead5da73ee91c65d1d631" address="unix:///run/containerd/s/0bc3b46a36af60f98607000854eb8c31a066c9c01039bcdd332644834b02273e" protocol=ttrpc version=3 Oct 31 14:03:59.301045 systemd[1]: Started cri-containerd-476ce4dadd638181418fa899717a58df38b1bee5a64ead5da73ee91c65d1d631.scope - libcontainer container 476ce4dadd638181418fa899717a58df38b1bee5a64ead5da73ee91c65d1d631. 
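The repeated FlexVolume failures above all follow from one mechanism: the kubelet execs the driver binary under `/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds` and unmarshals its stdout as JSON; when the binary is absent the output is empty, and the empty string fails to unmarshal, which is why every probe logs both "executable file not found in $PATH" and "unexpected end of JSON input". A minimal sketch of that call path (hypothetical helper name, not the kubelet's actual Go code):

```python
import json
import subprocess

def driver_call(executable, *args):
    """Sketch of the kubelet's FlexVolume driver call (driver-call.go):
    exec the driver binary and parse its stdout as JSON. A missing
    binary yields empty output, which then fails to unmarshal."""
    try:
        out = subprocess.run([executable, *args],
                             capture_output=True, text=True).stdout
    except FileNotFoundError:
        out = ""  # matches the log: executable file not found in $PATH, output: ""
    try:
        return json.loads(out)  # a working driver prints e.g. {"status": "Success"}
    except json.JSONDecodeError as exc:
        # Go's encoding/json reports this case as "unexpected end of JSON input"
        raise RuntimeError(f"Failed to unmarshal output for command: {args[0]}: {exc}")
```

Installing a driver that answers `init` with a well-formed JSON status object (or removing the stale plugin directory) stops the probe loop.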
Oct 31 14:03:59.356311 containerd[1613]: time="2025-10-31T14:03:59.356245121Z" level=info msg="StartContainer for \"476ce4dadd638181418fa899717a58df38b1bee5a64ead5da73ee91c65d1d631\" returns successfully" Oct 31 14:03:59.371301 systemd[1]: cri-containerd-476ce4dadd638181418fa899717a58df38b1bee5a64ead5da73ee91c65d1d631.scope: Deactivated successfully. Oct 31 14:03:59.373583 containerd[1613]: time="2025-10-31T14:03:59.373543969Z" level=info msg="received exit event container_id:\"476ce4dadd638181418fa899717a58df38b1bee5a64ead5da73ee91c65d1d631\" id:\"476ce4dadd638181418fa899717a58df38b1bee5a64ead5da73ee91c65d1d631\" pid:3464 exited_at:{seconds:1761919439 nanos:373133111}" Oct 31 14:03:59.373649 containerd[1613]: time="2025-10-31T14:03:59.373629357Z" level=info msg="TaskExit event in podsandbox handler container_id:\"476ce4dadd638181418fa899717a58df38b1bee5a64ead5da73ee91c65d1d631\" id:\"476ce4dadd638181418fa899717a58df38b1bee5a64ead5da73ee91c65d1d631\" pid:3464 exited_at:{seconds:1761919439 nanos:373133111}" Oct 31 14:03:59.404730 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-476ce4dadd638181418fa899717a58df38b1bee5a64ead5da73ee91c65d1d631-rootfs.mount: Deactivated successfully. 
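The pod_startup_latency_tracker entry above for calico-typha reports two durations: podStartE2EDuration (creation to observed running) and podStartSLOduration, which excludes image-pull time. The logged values are internally consistent, which can be checked from the `m=+` monotonic offsets (seconds since kubelet start) in the same entry:

```python
from decimal import Decimal

# Values taken from the "Observed pod startup duration" log entry above
e2e  = Decimal("4.128412607")                             # podStartE2EDuration
pull = Decimal("21.131138129") - Decimal("19.243112582")  # lastFinishedPulling - firstStartedPulling
slo  = e2e - pull                                         # startup time minus image pull

print(slo)  # matches podStartSLOduration=2.24038706
```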
Oct 31 14:03:59.945619 kubelet[2763]: E1031 14:03:59.945562 2763 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rpkmr" podUID="d72fcf62-30d2-4a4d-9feb-16a72bc97e14" Oct 31 14:04:00.021043 kubelet[2763]: I1031 14:04:00.020995 2763 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 31 14:04:00.021587 kubelet[2763]: E1031 14:04:00.021278 2763 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 14:04:00.021958 containerd[1613]: time="2025-10-31T14:04:00.021919077Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Oct 31 14:04:00.022608 kubelet[2763]: E1031 14:04:00.022584 2763 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 14:04:01.946538 kubelet[2763]: E1031 14:04:01.946114 2763 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rpkmr" podUID="d72fcf62-30d2-4a4d-9feb-16a72bc97e14" Oct 31 14:04:02.894352 containerd[1613]: time="2025-10-31T14:04:02.894285556Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:04:02.895493 containerd[1613]: time="2025-10-31T14:04:02.895449986Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Oct 31 14:04:02.896655 containerd[1613]: 
time="2025-10-31T14:04:02.896610618Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:04:02.898586 containerd[1613]: time="2025-10-31T14:04:02.898536045Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:04:02.899144 containerd[1613]: time="2025-10-31T14:04:02.899050380Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.877094947s" Oct 31 14:04:02.899144 containerd[1613]: time="2025-10-31T14:04:02.899080471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Oct 31 14:04:02.903323 containerd[1613]: time="2025-10-31T14:04:02.903284916Z" level=info msg="CreateContainer within sandbox \"90547afb5a67aeba112de1470f06abfcc3986404741066c2eee5a446a12c253e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 31 14:04:02.912488 containerd[1613]: time="2025-10-31T14:04:02.912426189Z" level=info msg="Container 5a360d60ff967b7b98f1f0d21135398d99b9089c77cee79a2b4774c1633ab817: CDI devices from CRI Config.CDIDevices: []" Oct 31 14:04:02.921867 containerd[1613]: time="2025-10-31T14:04:02.921803031Z" level=info msg="CreateContainer within sandbox \"90547afb5a67aeba112de1470f06abfcc3986404741066c2eee5a446a12c253e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5a360d60ff967b7b98f1f0d21135398d99b9089c77cee79a2b4774c1633ab817\"" Oct 31 
14:04:02.922411 containerd[1613]: time="2025-10-31T14:04:02.922375687Z" level=info msg="StartContainer for \"5a360d60ff967b7b98f1f0d21135398d99b9089c77cee79a2b4774c1633ab817\"" Oct 31 14:04:02.924192 containerd[1613]: time="2025-10-31T14:04:02.924127223Z" level=info msg="connecting to shim 5a360d60ff967b7b98f1f0d21135398d99b9089c77cee79a2b4774c1633ab817" address="unix:///run/containerd/s/0bc3b46a36af60f98607000854eb8c31a066c9c01039bcdd332644834b02273e" protocol=ttrpc version=3 Oct 31 14:04:02.956118 systemd[1]: Started cri-containerd-5a360d60ff967b7b98f1f0d21135398d99b9089c77cee79a2b4774c1633ab817.scope - libcontainer container 5a360d60ff967b7b98f1f0d21135398d99b9089c77cee79a2b4774c1633ab817. Oct 31 14:04:03.003766 containerd[1613]: time="2025-10-31T14:04:03.003709449Z" level=info msg="StartContainer for \"5a360d60ff967b7b98f1f0d21135398d99b9089c77cee79a2b4774c1633ab817\" returns successfully" Oct 31 14:04:03.040333 kubelet[2763]: E1031 14:04:03.040265 2763 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 14:04:03.612750 kubelet[2763]: I1031 14:04:03.612694 2763 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 31 14:04:03.614539 kubelet[2763]: E1031 14:04:03.614511 2763 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 14:04:03.946141 kubelet[2763]: E1031 14:04:03.945929 2763 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rpkmr" podUID="d72fcf62-30d2-4a4d-9feb-16a72bc97e14" Oct 31 14:04:04.059394 kubelet[2763]: E1031 14:04:04.058126 2763 dns.go:154] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 14:04:04.059394 kubelet[2763]: E1031 14:04:04.058365 2763 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 14:04:04.204064 systemd[1]: cri-containerd-5a360d60ff967b7b98f1f0d21135398d99b9089c77cee79a2b4774c1633ab817.scope: Deactivated successfully. Oct 31 14:04:04.204900 systemd[1]: cri-containerd-5a360d60ff967b7b98f1f0d21135398d99b9089c77cee79a2b4774c1633ab817.scope: Consumed 674ms CPU time, 176M memory peak, 3.4M read from disk, 171.3M written to disk. Oct 31 14:04:04.205335 containerd[1613]: time="2025-10-31T14:04:04.205259983Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5a360d60ff967b7b98f1f0d21135398d99b9089c77cee79a2b4774c1633ab817\" id:\"5a360d60ff967b7b98f1f0d21135398d99b9089c77cee79a2b4774c1633ab817\" pid:3525 exited_at:{seconds:1761919444 nanos:204837513}" Oct 31 14:04:04.205612 containerd[1613]: time="2025-10-31T14:04:04.205366692Z" level=info msg="received exit event container_id:\"5a360d60ff967b7b98f1f0d21135398d99b9089c77cee79a2b4774c1633ab817\" id:\"5a360d60ff967b7b98f1f0d21135398d99b9089c77cee79a2b4774c1633ab817\" pid:3525 exited_at:{seconds:1761919444 nanos:204837513}" Oct 31 14:04:04.230098 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a360d60ff967b7b98f1f0d21135398d99b9089c77cee79a2b4774c1633ab817-rootfs.mount: Deactivated successfully. Oct 31 14:04:04.266667 kubelet[2763]: I1031 14:04:04.266621 2763 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Oct 31 14:04:04.299018 systemd[1]: Created slice kubepods-burstable-podf16f1057_636c_439c_9e25_76999ec5fe91.slice - libcontainer container kubepods-burstable-podf16f1057_636c_439c_9e25_76999ec5fe91.slice. 
Oct 31 14:04:04.308614 systemd[1]: Created slice kubepods-besteffort-pod831c9a9e_d727_4252_8c51_c27a6cbc929f.slice - libcontainer container kubepods-besteffort-pod831c9a9e_d727_4252_8c51_c27a6cbc929f.slice. Oct 31 14:04:04.315607 systemd[1]: Created slice kubepods-burstable-pod77523b44_946d_4f16_81ec_a47f0ef59d93.slice - libcontainer container kubepods-burstable-pod77523b44_946d_4f16_81ec_a47f0ef59d93.slice. Oct 31 14:04:04.321192 systemd[1]: Created slice kubepods-besteffort-pod310217e9_5570_4d9f_976f_99e2e93d2643.slice - libcontainer container kubepods-besteffort-pod310217e9_5570_4d9f_976f_99e2e93d2643.slice. Oct 31 14:04:04.322406 kubelet[2763]: I1031 14:04:04.321826 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/215ef2ac-986a-4fe6-a85d-c11759880d37-whisker-backend-key-pair\") pod \"whisker-847f7fcf89-7d45k\" (UID: \"215ef2ac-986a-4fe6-a85d-c11759880d37\") " pod="calico-system/whisker-847f7fcf89-7d45k" Oct 31 14:04:04.323677 kubelet[2763]: I1031 14:04:04.323642 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/215ef2ac-986a-4fe6-a85d-c11759880d37-whisker-ca-bundle\") pod \"whisker-847f7fcf89-7d45k\" (UID: \"215ef2ac-986a-4fe6-a85d-c11759880d37\") " pod="calico-system/whisker-847f7fcf89-7d45k" Oct 31 14:04:04.323677 kubelet[2763]: I1031 14:04:04.323677 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt9bq\" (UniqueName: \"kubernetes.io/projected/215ef2ac-986a-4fe6-a85d-c11759880d37-kube-api-access-rt9bq\") pod \"whisker-847f7fcf89-7d45k\" (UID: \"215ef2ac-986a-4fe6-a85d-c11759880d37\") " pod="calico-system/whisker-847f7fcf89-7d45k" Oct 31 14:04:04.323973 kubelet[2763]: I1031 14:04:04.323726 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/831c9a9e-d727-4252-8c51-c27a6cbc929f-calico-apiserver-certs\") pod \"calico-apiserver-7694dfd98b-9cd4s\" (UID: \"831c9a9e-d727-4252-8c51-c27a6cbc929f\") " pod="calico-apiserver/calico-apiserver-7694dfd98b-9cd4s" Oct 31 14:04:04.324064 kubelet[2763]: I1031 14:04:04.324032 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnhdj\" (UniqueName: \"kubernetes.io/projected/77523b44-946d-4f16-81ec-a47f0ef59d93-kube-api-access-wnhdj\") pod \"coredns-66bc5c9577-tftsn\" (UID: \"77523b44-946d-4f16-81ec-a47f0ef59d93\") " pod="kube-system/coredns-66bc5c9577-tftsn" Oct 31 14:04:04.324163 kubelet[2763]: I1031 14:04:04.324086 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/77523b44-946d-4f16-81ec-a47f0ef59d93-config-volume\") pod \"coredns-66bc5c9577-tftsn\" (UID: \"77523b44-946d-4f16-81ec-a47f0ef59d93\") " pod="kube-system/coredns-66bc5c9577-tftsn" Oct 31 14:04:04.324163 kubelet[2763]: I1031 14:04:04.324114 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drbc8\" (UniqueName: \"kubernetes.io/projected/f16f1057-636c-439c-9e25-76999ec5fe91-kube-api-access-drbc8\") pod \"coredns-66bc5c9577-thzch\" (UID: \"f16f1057-636c-439c-9e25-76999ec5fe91\") " pod="kube-system/coredns-66bc5c9577-thzch" Oct 31 14:04:04.324163 kubelet[2763]: I1031 14:04:04.324138 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f16f1057-636c-439c-9e25-76999ec5fe91-config-volume\") pod \"coredns-66bc5c9577-thzch\" (UID: \"f16f1057-636c-439c-9e25-76999ec5fe91\") " pod="kube-system/coredns-66bc5c9577-thzch" Oct 31 14:04:04.324163 kubelet[2763]: I1031 14:04:04.324155 2763 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/310217e9-5570-4d9f-976f-99e2e93d2643-tigera-ca-bundle\") pod \"calico-kube-controllers-75d9bc8644-z9ssv\" (UID: \"310217e9-5570-4d9f-976f-99e2e93d2643\") " pod="calico-system/calico-kube-controllers-75d9bc8644-z9ssv" Oct 31 14:04:04.324272 kubelet[2763]: I1031 14:04:04.324182 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84cf2\" (UniqueName: \"kubernetes.io/projected/310217e9-5570-4d9f-976f-99e2e93d2643-kube-api-access-84cf2\") pod \"calico-kube-controllers-75d9bc8644-z9ssv\" (UID: \"310217e9-5570-4d9f-976f-99e2e93d2643\") " pod="calico-system/calico-kube-controllers-75d9bc8644-z9ssv" Oct 31 14:04:04.324272 kubelet[2763]: I1031 14:04:04.324201 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqrb4\" (UniqueName: \"kubernetes.io/projected/831c9a9e-d727-4252-8c51-c27a6cbc929f-kube-api-access-pqrb4\") pod \"calico-apiserver-7694dfd98b-9cd4s\" (UID: \"831c9a9e-d727-4252-8c51-c27a6cbc929f\") " pod="calico-apiserver/calico-apiserver-7694dfd98b-9cd4s" Oct 31 14:04:04.327692 systemd[1]: Created slice kubepods-besteffort-pod215ef2ac_986a_4fe6_a85d_c11759880d37.slice - libcontainer container kubepods-besteffort-pod215ef2ac_986a_4fe6_a85d_c11759880d37.slice. Oct 31 14:04:04.335035 systemd[1]: Created slice kubepods-besteffort-pod234f93e5_cb04_4b52_a43f_b06df690a25b.slice - libcontainer container kubepods-besteffort-pod234f93e5_cb04_4b52_a43f_b06df690a25b.slice. Oct 31 14:04:04.341630 systemd[1]: Created slice kubepods-besteffort-pod3ffaded5_6338_4036_9fc4_23fbc0d5fd0b.slice - libcontainer container kubepods-besteffort-pod3ffaded5_6338_4036_9fc4_23fbc0d5fd0b.slice. 
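The kubepods slice names systemd creates above are derived from the pod's QoS class and UID. Because systemd treats "-" in unit names as a hierarchy separator, the kubelet's systemd cgroup driver escapes the UID's hyphens to underscores, which is why the UID `f16f1057-636c-439c-9e25-76999ec5fe91` from the volume entries appears as `podf16f1057_636c_439c_9e25_76999ec5fe91` in the slice name. A sketch of the naming rule (hypothetical helper; guaranteed-QoS pods, which omit the QoS segment, do not appear in this excerpt):

```python
def kubepods_slice(qos_class, pod_uid):
    # systemd reserves "-" as a unit-name path separator, so the pod
    # UID's hyphens are escaped to "_" before building the slice name
    return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

print(kubepods_slice("burstable", "f16f1057-636c-439c-9e25-76999ec5fe91"))
```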
Oct 31 14:04:04.425318 kubelet[2763]: I1031 14:04:04.424722 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/234f93e5-cb04-4b52-a43f-b06df690a25b-config\") pod \"goldmane-7c778bb748-bl4qr\" (UID: \"234f93e5-cb04-4b52-a43f-b06df690a25b\") " pod="calico-system/goldmane-7c778bb748-bl4qr" Oct 31 14:04:04.425318 kubelet[2763]: I1031 14:04:04.424764 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/234f93e5-cb04-4b52-a43f-b06df690a25b-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-bl4qr\" (UID: \"234f93e5-cb04-4b52-a43f-b06df690a25b\") " pod="calico-system/goldmane-7c778bb748-bl4qr" Oct 31 14:04:04.425318 kubelet[2763]: I1031 14:04:04.424835 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmxcn\" (UniqueName: \"kubernetes.io/projected/234f93e5-cb04-4b52-a43f-b06df690a25b-kube-api-access-wmxcn\") pod \"goldmane-7c778bb748-bl4qr\" (UID: \"234f93e5-cb04-4b52-a43f-b06df690a25b\") " pod="calico-system/goldmane-7c778bb748-bl4qr" Oct 31 14:04:04.425318 kubelet[2763]: I1031 14:04:04.424884 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28gzh\" (UniqueName: \"kubernetes.io/projected/3ffaded5-6338-4036-9fc4-23fbc0d5fd0b-kube-api-access-28gzh\") pod \"calico-apiserver-7694dfd98b-xgdgb\" (UID: \"3ffaded5-6338-4036-9fc4-23fbc0d5fd0b\") " pod="calico-apiserver/calico-apiserver-7694dfd98b-xgdgb" Oct 31 14:04:04.425318 kubelet[2763]: I1031 14:04:04.424901 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/234f93e5-cb04-4b52-a43f-b06df690a25b-goldmane-key-pair\") pod \"goldmane-7c778bb748-bl4qr\" (UID: 
\"234f93e5-cb04-4b52-a43f-b06df690a25b\") " pod="calico-system/goldmane-7c778bb748-bl4qr" Oct 31 14:04:04.425582 kubelet[2763]: I1031 14:04:04.424998 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3ffaded5-6338-4036-9fc4-23fbc0d5fd0b-calico-apiserver-certs\") pod \"calico-apiserver-7694dfd98b-xgdgb\" (UID: \"3ffaded5-6338-4036-9fc4-23fbc0d5fd0b\") " pod="calico-apiserver/calico-apiserver-7694dfd98b-xgdgb" Oct 31 14:04:04.660661 kubelet[2763]: E1031 14:04:04.660290 2763 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 14:04:04.661863 containerd[1613]: time="2025-10-31T14:04:04.661772957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-thzch,Uid:f16f1057-636c-439c-9e25-76999ec5fe91,Namespace:kube-system,Attempt:0,}" Oct 31 14:04:04.716094 containerd[1613]: time="2025-10-31T14:04:04.716040339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7694dfd98b-9cd4s,Uid:831c9a9e-d727-4252-8c51-c27a6cbc929f,Namespace:calico-apiserver,Attempt:0,}" Oct 31 14:04:04.776551 containerd[1613]: time="2025-10-31T14:04:04.776406363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7694dfd98b-xgdgb,Uid:3ffaded5-6338-4036-9fc4-23fbc0d5fd0b,Namespace:calico-apiserver,Attempt:0,}" Oct 31 14:04:04.814581 containerd[1613]: time="2025-10-31T14:04:04.814512613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75d9bc8644-z9ssv,Uid:310217e9-5570-4d9f-976f-99e2e93d2643,Namespace:calico-system,Attempt:0,}" Oct 31 14:04:04.892266 kubelet[2763]: E1031 14:04:04.892202 2763 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Oct 31 14:04:04.894917 containerd[1613]: time="2025-10-31T14:04:04.894776159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-tftsn,Uid:77523b44-946d-4f16-81ec-a47f0ef59d93,Namespace:kube-system,Attempt:0,}" Oct 31 14:04:04.908205 containerd[1613]: time="2025-10-31T14:04:04.908117829Z" level=error msg="Failed to destroy network for sandbox \"ab8f908b2e59d0df07219b941d5283cf18924b0255f73e27f0de6c1a91b6eab0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 14:04:04.982061 containerd[1613]: time="2025-10-31T14:04:04.981914095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-bl4qr,Uid:234f93e5-cb04-4b52-a43f-b06df690a25b,Namespace:calico-system,Attempt:0,}" Oct 31 14:04:05.008724 containerd[1613]: time="2025-10-31T14:04:05.008660559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-847f7fcf89-7d45k,Uid:215ef2ac-986a-4fe6-a85d-c11759880d37,Namespace:calico-system,Attempt:0,}" Oct 31 14:04:05.016126 containerd[1613]: time="2025-10-31T14:04:05.016032500Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-thzch,Uid:f16f1057-636c-439c-9e25-76999ec5fe91,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab8f908b2e59d0df07219b941d5283cf18924b0255f73e27f0de6c1a91b6eab0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 14:04:05.016526 kubelet[2763]: E1031 14:04:05.016373 2763 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab8f908b2e59d0df07219b941d5283cf18924b0255f73e27f0de6c1a91b6eab0\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 14:04:05.016526 kubelet[2763]: E1031 14:04:05.016468 2763 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab8f908b2e59d0df07219b941d5283cf18924b0255f73e27f0de6c1a91b6eab0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-thzch" Oct 31 14:04:05.016526 kubelet[2763]: E1031 14:04:05.016492 2763 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab8f908b2e59d0df07219b941d5283cf18924b0255f73e27f0de6c1a91b6eab0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-thzch" Oct 31 14:04:05.016714 kubelet[2763]: E1031 14:04:05.016552 2763 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-thzch_kube-system(f16f1057-636c-439c-9e25-76999ec5fe91)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-thzch_kube-system(f16f1057-636c-439c-9e25-76999ec5fe91)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ab8f908b2e59d0df07219b941d5283cf18924b0255f73e27f0de6c1a91b6eab0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-thzch" podUID="f16f1057-636c-439c-9e25-76999ec5fe91" Oct 31 14:04:05.066798 kubelet[2763]: E1031 14:04:05.066634 2763 dns.go:154] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 14:04:05.067611 containerd[1613]: time="2025-10-31T14:04:05.067551972Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Oct 31 14:04:05.088147 containerd[1613]: time="2025-10-31T14:04:05.087910086Z" level=error msg="Failed to destroy network for sandbox \"1c65989fe141b2c1a4ac393038580e9159ab9dd320f74b960f0003516c962db4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 14:04:05.097920 containerd[1613]: time="2025-10-31T14:04:05.097802469Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7694dfd98b-9cd4s,Uid:831c9a9e-d727-4252-8c51-c27a6cbc929f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c65989fe141b2c1a4ac393038580e9159ab9dd320f74b960f0003516c962db4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 14:04:05.098652 kubelet[2763]: E1031 14:04:05.098147 2763 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c65989fe141b2c1a4ac393038580e9159ab9dd320f74b960f0003516c962db4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 14:04:05.098652 kubelet[2763]: E1031 14:04:05.098207 2763 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c65989fe141b2c1a4ac393038580e9159ab9dd320f74b960f0003516c962db4\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7694dfd98b-9cd4s" Oct 31 14:04:05.098652 kubelet[2763]: E1031 14:04:05.098228 2763 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1c65989fe141b2c1a4ac393038580e9159ab9dd320f74b960f0003516c962db4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7694dfd98b-9cd4s" Oct 31 14:04:05.098954 kubelet[2763]: E1031 14:04:05.098296 2763 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7694dfd98b-9cd4s_calico-apiserver(831c9a9e-d727-4252-8c51-c27a6cbc929f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7694dfd98b-9cd4s_calico-apiserver(831c9a9e-d727-4252-8c51-c27a6cbc929f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1c65989fe141b2c1a4ac393038580e9159ab9dd320f74b960f0003516c962db4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7694dfd98b-9cd4s" podUID="831c9a9e-d727-4252-8c51-c27a6cbc929f" Oct 31 14:04:05.142542 containerd[1613]: time="2025-10-31T14:04:05.142461882Z" level=error msg="Failed to destroy network for sandbox \"0eca3f00c09859db34fd4b219d8f131c2b9696a894303cfb5f61f103269bb742\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 14:04:05.144684 
containerd[1613]: time="2025-10-31T14:04:05.144646635Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7694dfd98b-xgdgb,Uid:3ffaded5-6338-4036-9fc4-23fbc0d5fd0b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0eca3f00c09859db34fd4b219d8f131c2b9696a894303cfb5f61f103269bb742\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 14:04:05.145096 kubelet[2763]: E1031 14:04:05.145057 2763 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0eca3f00c09859db34fd4b219d8f131c2b9696a894303cfb5f61f103269bb742\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 14:04:05.145232 kubelet[2763]: E1031 14:04:05.145209 2763 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0eca3f00c09859db34fd4b219d8f131c2b9696a894303cfb5f61f103269bb742\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7694dfd98b-xgdgb" Oct 31 14:04:05.145328 kubelet[2763]: E1031 14:04:05.145308 2763 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0eca3f00c09859db34fd4b219d8f131c2b9696a894303cfb5f61f103269bb742\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-7694dfd98b-xgdgb" Oct 31 14:04:05.145480 kubelet[2763]: E1031 14:04:05.145453 2763 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7694dfd98b-xgdgb_calico-apiserver(3ffaded5-6338-4036-9fc4-23fbc0d5fd0b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7694dfd98b-xgdgb_calico-apiserver(3ffaded5-6338-4036-9fc4-23fbc0d5fd0b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0eca3f00c09859db34fd4b219d8f131c2b9696a894303cfb5f61f103269bb742\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7694dfd98b-xgdgb" podUID="3ffaded5-6338-4036-9fc4-23fbc0d5fd0b" Oct 31 14:04:05.149277 containerd[1613]: time="2025-10-31T14:04:05.149213930Z" level=error msg="Failed to destroy network for sandbox \"e9b1d1b3082b90bb857505cf1c80e532cbd7886e608d4b9e961e0d1443aad603\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 14:04:05.150628 containerd[1613]: time="2025-10-31T14:04:05.150584673Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-tftsn,Uid:77523b44-946d-4f16-81ec-a47f0ef59d93,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9b1d1b3082b90bb857505cf1c80e532cbd7886e608d4b9e961e0d1443aad603\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 14:04:05.151099 kubelet[2763]: E1031 14:04:05.151058 2763 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"e9b1d1b3082b90bb857505cf1c80e532cbd7886e608d4b9e961e0d1443aad603\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 14:04:05.151177 kubelet[2763]: E1031 14:04:05.151117 2763 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9b1d1b3082b90bb857505cf1c80e532cbd7886e608d4b9e961e0d1443aad603\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-tftsn" Oct 31 14:04:05.151177 kubelet[2763]: E1031 14:04:05.151141 2763 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9b1d1b3082b90bb857505cf1c80e532cbd7886e608d4b9e961e0d1443aad603\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-tftsn" Oct 31 14:04:05.151295 kubelet[2763]: E1031 14:04:05.151205 2763 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-tftsn_kube-system(77523b44-946d-4f16-81ec-a47f0ef59d93)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-tftsn_kube-system(77523b44-946d-4f16-81ec-a47f0ef59d93)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e9b1d1b3082b90bb857505cf1c80e532cbd7886e608d4b9e961e0d1443aad603\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-66bc5c9577-tftsn" podUID="77523b44-946d-4f16-81ec-a47f0ef59d93" Oct 31 14:04:05.164644 containerd[1613]: time="2025-10-31T14:04:05.164557893Z" level=error msg="Failed to destroy network for sandbox \"ddc67b8d60f8a2c0e5028f136ea7e79c66627fd325f8467439e9c73eee988bd5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 14:04:05.166085 containerd[1613]: time="2025-10-31T14:04:05.166030536Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75d9bc8644-z9ssv,Uid:310217e9-5570-4d9f-976f-99e2e93d2643,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddc67b8d60f8a2c0e5028f136ea7e79c66627fd325f8467439e9c73eee988bd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 14:04:05.166376 kubelet[2763]: E1031 14:04:05.166330 2763 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddc67b8d60f8a2c0e5028f136ea7e79c66627fd325f8467439e9c73eee988bd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 14:04:05.166452 kubelet[2763]: E1031 14:04:05.166400 2763 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddc67b8d60f8a2c0e5028f136ea7e79c66627fd325f8467439e9c73eee988bd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-75d9bc8644-z9ssv" Oct 31 14:04:05.166452 kubelet[2763]: E1031 14:04:05.166423 2763 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddc67b8d60f8a2c0e5028f136ea7e79c66627fd325f8467439e9c73eee988bd5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75d9bc8644-z9ssv" Oct 31 14:04:05.166525 kubelet[2763]: E1031 14:04:05.166488 2763 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-75d9bc8644-z9ssv_calico-system(310217e9-5570-4d9f-976f-99e2e93d2643)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-75d9bc8644-z9ssv_calico-system(310217e9-5570-4d9f-976f-99e2e93d2643)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ddc67b8d60f8a2c0e5028f136ea7e79c66627fd325f8467439e9c73eee988bd5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-75d9bc8644-z9ssv" podUID="310217e9-5570-4d9f-976f-99e2e93d2643" Oct 31 14:04:05.170431 containerd[1613]: time="2025-10-31T14:04:05.170377267Z" level=error msg="Failed to destroy network for sandbox \"606a4c67c26b785d97b42a3831f99c614f36784ddc2aed7ed0f88b23e2c8131b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 14:04:05.172111 containerd[1613]: time="2025-10-31T14:04:05.172003415Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-847f7fcf89-7d45k,Uid:215ef2ac-986a-4fe6-a85d-c11759880d37,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"606a4c67c26b785d97b42a3831f99c614f36784ddc2aed7ed0f88b23e2c8131b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 14:04:05.172438 kubelet[2763]: E1031 14:04:05.172382 2763 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"606a4c67c26b785d97b42a3831f99c614f36784ddc2aed7ed0f88b23e2c8131b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 14:04:05.172438 kubelet[2763]: E1031 14:04:05.172439 2763 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"606a4c67c26b785d97b42a3831f99c614f36784ddc2aed7ed0f88b23e2c8131b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-847f7fcf89-7d45k" Oct 31 14:04:05.172642 kubelet[2763]: E1031 14:04:05.172460 2763 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"606a4c67c26b785d97b42a3831f99c614f36784ddc2aed7ed0f88b23e2c8131b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-847f7fcf89-7d45k" Oct 31 14:04:05.172642 kubelet[2763]: E1031 14:04:05.172521 2763 pod_workers.go:1324] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"whisker-847f7fcf89-7d45k_calico-system(215ef2ac-986a-4fe6-a85d-c11759880d37)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-847f7fcf89-7d45k_calico-system(215ef2ac-986a-4fe6-a85d-c11759880d37)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"606a4c67c26b785d97b42a3831f99c614f36784ddc2aed7ed0f88b23e2c8131b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-847f7fcf89-7d45k" podUID="215ef2ac-986a-4fe6-a85d-c11759880d37" Oct 31 14:04:05.173077 containerd[1613]: time="2025-10-31T14:04:05.173046756Z" level=error msg="Failed to destroy network for sandbox \"6a71b93619114c156903fc14cff6b61d3262a7c599e752ac95cc53720d7cedda\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 14:04:05.174493 containerd[1613]: time="2025-10-31T14:04:05.174432100Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-bl4qr,Uid:234f93e5-cb04-4b52-a43f-b06df690a25b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a71b93619114c156903fc14cff6b61d3262a7c599e752ac95cc53720d7cedda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 14:04:05.174798 kubelet[2763]: E1031 14:04:05.174748 2763 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a71b93619114c156903fc14cff6b61d3262a7c599e752ac95cc53720d7cedda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 14:04:05.174798 kubelet[2763]: E1031 14:04:05.174781 2763 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a71b93619114c156903fc14cff6b61d3262a7c599e752ac95cc53720d7cedda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-bl4qr" Oct 31 14:04:05.174798 kubelet[2763]: E1031 14:04:05.174796 2763 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a71b93619114c156903fc14cff6b61d3262a7c599e752ac95cc53720d7cedda\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-bl4qr" Oct 31 14:04:05.175021 kubelet[2763]: E1031 14:04:05.174837 2763 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-bl4qr_calico-system(234f93e5-cb04-4b52-a43f-b06df690a25b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-bl4qr_calico-system(234f93e5-cb04-4b52-a43f-b06df690a25b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6a71b93619114c156903fc14cff6b61d3262a7c599e752ac95cc53720d7cedda\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-bl4qr" podUID="234f93e5-cb04-4b52-a43f-b06df690a25b" Oct 31 14:04:05.952272 systemd[1]: Created slice 
kubepods-besteffort-podd72fcf62_30d2_4a4d_9feb_16a72bc97e14.slice - libcontainer container kubepods-besteffort-podd72fcf62_30d2_4a4d_9feb_16a72bc97e14.slice. Oct 31 14:04:05.972945 containerd[1613]: time="2025-10-31T14:04:05.972878914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rpkmr,Uid:d72fcf62-30d2-4a4d-9feb-16a72bc97e14,Namespace:calico-system,Attempt:0,}" Oct 31 14:04:06.035194 containerd[1613]: time="2025-10-31T14:04:06.035117106Z" level=error msg="Failed to destroy network for sandbox \"d052009ae399abde3dea384f96b7c6d567718a0d387ffff48add170d844e260c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 14:04:06.037572 systemd[1]: run-netns-cni\x2ddb64c07e\x2ddee1\x2d7c05\x2db746\x2de57f0e71523f.mount: Deactivated successfully. Oct 31 14:04:06.039410 containerd[1613]: time="2025-10-31T14:04:06.039349058Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rpkmr,Uid:d72fcf62-30d2-4a4d-9feb-16a72bc97e14,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d052009ae399abde3dea384f96b7c6d567718a0d387ffff48add170d844e260c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 14:04:06.039706 kubelet[2763]: E1031 14:04:06.039660 2763 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d052009ae399abde3dea384f96b7c6d567718a0d387ffff48add170d844e260c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 31 14:04:06.039814 kubelet[2763]: E1031 14:04:06.039721 
2763 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d052009ae399abde3dea384f96b7c6d567718a0d387ffff48add170d844e260c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rpkmr" Oct 31 14:04:06.039814 kubelet[2763]: E1031 14:04:06.039750 2763 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d052009ae399abde3dea384f96b7c6d567718a0d387ffff48add170d844e260c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rpkmr" Oct 31 14:04:06.039934 kubelet[2763]: E1031 14:04:06.039834 2763 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rpkmr_calico-system(d72fcf62-30d2-4a4d-9feb-16a72bc97e14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rpkmr_calico-system(d72fcf62-30d2-4a4d-9feb-16a72bc97e14)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d052009ae399abde3dea384f96b7c6d567718a0d387ffff48add170d844e260c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rpkmr" podUID="d72fcf62-30d2-4a4d-9feb-16a72bc97e14" Oct 31 14:04:13.515667 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4280223963.mount: Deactivated successfully. 
Oct 31 14:04:14.254393 containerd[1613]: time="2025-10-31T14:04:14.254312158Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:04:14.255212 containerd[1613]: time="2025-10-31T14:04:14.255153307Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Oct 31 14:04:14.256579 containerd[1613]: time="2025-10-31T14:04:14.256535032Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:04:14.258467 containerd[1613]: time="2025-10-31T14:04:14.258426230Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 31 14:04:14.258992 containerd[1613]: time="2025-10-31T14:04:14.258942287Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 9.191336175s" Oct 31 14:04:14.259046 containerd[1613]: time="2025-10-31T14:04:14.258994231Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Oct 31 14:04:14.269263 containerd[1613]: time="2025-10-31T14:04:14.269220771Z" level=info msg="CreateContainer within sandbox \"90547afb5a67aeba112de1470f06abfcc3986404741066c2eee5a446a12c253e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 31 14:04:14.279165 containerd[1613]: time="2025-10-31T14:04:14.279114341Z" level=info msg="Container 
590b7d74cc3730dddb81d9da4bc767e402622075d8a567981aa6ebd1dc5043bc: CDI devices from CRI Config.CDIDevices: []" Oct 31 14:04:14.290000 containerd[1613]: time="2025-10-31T14:04:14.289955396Z" level=info msg="CreateContainer within sandbox \"90547afb5a67aeba112de1470f06abfcc3986404741066c2eee5a446a12c253e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"590b7d74cc3730dddb81d9da4bc767e402622075d8a567981aa6ebd1dc5043bc\"" Oct 31 14:04:14.290559 containerd[1613]: time="2025-10-31T14:04:14.290534819Z" level=info msg="StartContainer for \"590b7d74cc3730dddb81d9da4bc767e402622075d8a567981aa6ebd1dc5043bc\"" Oct 31 14:04:14.292055 containerd[1613]: time="2025-10-31T14:04:14.292018479Z" level=info msg="connecting to shim 590b7d74cc3730dddb81d9da4bc767e402622075d8a567981aa6ebd1dc5043bc" address="unix:///run/containerd/s/0bc3b46a36af60f98607000854eb8c31a066c9c01039bcdd332644834b02273e" protocol=ttrpc version=3 Oct 31 14:04:14.319247 systemd[1]: Started cri-containerd-590b7d74cc3730dddb81d9da4bc767e402622075d8a567981aa6ebd1dc5043bc.scope - libcontainer container 590b7d74cc3730dddb81d9da4bc767e402622075d8a567981aa6ebd1dc5043bc. Oct 31 14:04:14.459843 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 31 14:04:14.460056 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Oct 31 14:04:14.577590 containerd[1613]: time="2025-10-31T14:04:14.577470526Z" level=info msg="StartContainer for \"590b7d74cc3730dddb81d9da4bc767e402622075d8a567981aa6ebd1dc5043bc\" returns successfully" Oct 31 14:04:14.693879 kubelet[2763]: I1031 14:04:14.693579 2763 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/215ef2ac-986a-4fe6-a85d-c11759880d37-whisker-backend-key-pair\") pod \"215ef2ac-986a-4fe6-a85d-c11759880d37\" (UID: \"215ef2ac-986a-4fe6-a85d-c11759880d37\") " Oct 31 14:04:14.693879 kubelet[2763]: I1031 14:04:14.693647 2763 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/215ef2ac-986a-4fe6-a85d-c11759880d37-whisker-ca-bundle\") pod \"215ef2ac-986a-4fe6-a85d-c11759880d37\" (UID: \"215ef2ac-986a-4fe6-a85d-c11759880d37\") " Oct 31 14:04:14.693879 kubelet[2763]: I1031 14:04:14.693665 2763 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rt9bq\" (UniqueName: \"kubernetes.io/projected/215ef2ac-986a-4fe6-a85d-c11759880d37-kube-api-access-rt9bq\") pod \"215ef2ac-986a-4fe6-a85d-c11759880d37\" (UID: \"215ef2ac-986a-4fe6-a85d-c11759880d37\") " Oct 31 14:04:14.695867 kubelet[2763]: I1031 14:04:14.695023 2763 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/215ef2ac-986a-4fe6-a85d-c11759880d37-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "215ef2ac-986a-4fe6-a85d-c11759880d37" (UID: "215ef2ac-986a-4fe6-a85d-c11759880d37"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 31 14:04:14.701934 systemd[1]: var-lib-kubelet-pods-215ef2ac\x2d986a\x2d4fe6\x2da85d\x2dc11759880d37-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Oct 31 14:04:14.705820 systemd[1]: var-lib-kubelet-pods-215ef2ac\x2d986a\x2d4fe6\x2da85d\x2dc11759880d37-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drt9bq.mount: Deactivated successfully. Oct 31 14:04:14.705950 kubelet[2763]: I1031 14:04:14.703242 2763 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/215ef2ac-986a-4fe6-a85d-c11759880d37-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "215ef2ac-986a-4fe6-a85d-c11759880d37" (UID: "215ef2ac-986a-4fe6-a85d-c11759880d37"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 31 14:04:14.707161 kubelet[2763]: I1031 14:04:14.707134 2763 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/215ef2ac-986a-4fe6-a85d-c11759880d37-kube-api-access-rt9bq" (OuterVolumeSpecName: "kube-api-access-rt9bq") pod "215ef2ac-986a-4fe6-a85d-c11759880d37" (UID: "215ef2ac-986a-4fe6-a85d-c11759880d37"). InnerVolumeSpecName "kube-api-access-rt9bq". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 31 14:04:14.794672 kubelet[2763]: I1031 14:04:14.794598 2763 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/215ef2ac-986a-4fe6-a85d-c11759880d37-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 31 14:04:14.794672 kubelet[2763]: I1031 14:04:14.794650 2763 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rt9bq\" (UniqueName: \"kubernetes.io/projected/215ef2ac-986a-4fe6-a85d-c11759880d37-kube-api-access-rt9bq\") on node \"localhost\" DevicePath \"\"" Oct 31 14:04:14.794672 kubelet[2763]: I1031 14:04:14.794664 2763 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/215ef2ac-986a-4fe6-a85d-c11759880d37-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Oct 31 14:04:14.955154 systemd[1]: Removed slice kubepods-besteffort-pod215ef2ac_986a_4fe6_a85d_c11759880d37.slice - libcontainer container kubepods-besteffort-pod215ef2ac_986a_4fe6_a85d_c11759880d37.slice. 
Oct 31 14:04:15.090439 kubelet[2763]: E1031 14:04:15.090387 2763 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 14:04:15.105362 kubelet[2763]: I1031 14:04:15.105292 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-vz57s" podStartSLOduration=2.168170926 podStartE2EDuration="20.105267016s" podCreationTimestamp="2025-10-31 14:03:55 +0000 UTC" firstStartedPulling="2025-10-31 14:03:56.322816406 +0000 UTC m=+19.475421874" lastFinishedPulling="2025-10-31 14:04:14.259912496 +0000 UTC m=+37.412517964" observedRunningTime="2025-10-31 14:04:15.104677164 +0000 UTC m=+38.257282622" watchObservedRunningTime="2025-10-31 14:04:15.105267016 +0000 UTC m=+38.257872484" Oct 31 14:04:15.154794 systemd[1]: Created slice kubepods-besteffort-podcbbbd567_9df7_46cd_88ad_c52cb886a0d1.slice - libcontainer container kubepods-besteffort-podcbbbd567_9df7_46cd_88ad_c52cb886a0d1.slice. 
Oct 31 14:04:15.197739 kubelet[2763]: I1031 14:04:15.197651 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cbbbd567-9df7-46cd-88ad-c52cb886a0d1-whisker-ca-bundle\") pod \"whisker-7dd95d9845-xmgkr\" (UID: \"cbbbd567-9df7-46cd-88ad-c52cb886a0d1\") " pod="calico-system/whisker-7dd95d9845-xmgkr" Oct 31 14:04:15.197941 kubelet[2763]: I1031 14:04:15.197745 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cbbbd567-9df7-46cd-88ad-c52cb886a0d1-whisker-backend-key-pair\") pod \"whisker-7dd95d9845-xmgkr\" (UID: \"cbbbd567-9df7-46cd-88ad-c52cb886a0d1\") " pod="calico-system/whisker-7dd95d9845-xmgkr" Oct 31 14:04:15.197941 kubelet[2763]: I1031 14:04:15.197789 2763 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgw5s\" (UniqueName: \"kubernetes.io/projected/cbbbd567-9df7-46cd-88ad-c52cb886a0d1-kube-api-access-xgw5s\") pod \"whisker-7dd95d9845-xmgkr\" (UID: \"cbbbd567-9df7-46cd-88ad-c52cb886a0d1\") " pod="calico-system/whisker-7dd95d9845-xmgkr" Oct 31 14:04:15.462364 containerd[1613]: time="2025-10-31T14:04:15.462292139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7dd95d9845-xmgkr,Uid:cbbbd567-9df7-46cd-88ad-c52cb886a0d1,Namespace:calico-system,Attempt:0,}" Oct 31 14:04:15.605789 systemd-networkd[1520]: calia4c4a442f42: Link UP Oct 31 14:04:15.606085 systemd-networkd[1520]: calia4c4a442f42: Gained carrier Oct 31 14:04:15.622493 containerd[1613]: 2025-10-31 14:04:15.484 [INFO][3903] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 31 14:04:15.622493 containerd[1613]: 2025-10-31 14:04:15.502 [INFO][3903] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-whisker--7dd95d9845--xmgkr-eth0 whisker-7dd95d9845- calico-system cbbbd567-9df7-46cd-88ad-c52cb886a0d1 962 0 2025-10-31 14:04:15 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7dd95d9845 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7dd95d9845-xmgkr eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia4c4a442f42 [] [] }} ContainerID="d23cbf68615dd389ee99e4abf1ed3762777d5db453a9d1a7d776cbce1ad1b42a" Namespace="calico-system" Pod="whisker-7dd95d9845-xmgkr" WorkloadEndpoint="localhost-k8s-whisker--7dd95d9845--xmgkr-" Oct 31 14:04:15.622493 containerd[1613]: 2025-10-31 14:04:15.503 [INFO][3903] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d23cbf68615dd389ee99e4abf1ed3762777d5db453a9d1a7d776cbce1ad1b42a" Namespace="calico-system" Pod="whisker-7dd95d9845-xmgkr" WorkloadEndpoint="localhost-k8s-whisker--7dd95d9845--xmgkr-eth0" Oct 31 14:04:15.622493 containerd[1613]: 2025-10-31 14:04:15.558 [INFO][3918] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d23cbf68615dd389ee99e4abf1ed3762777d5db453a9d1a7d776cbce1ad1b42a" HandleID="k8s-pod-network.d23cbf68615dd389ee99e4abf1ed3762777d5db453a9d1a7d776cbce1ad1b42a" Workload="localhost-k8s-whisker--7dd95d9845--xmgkr-eth0" Oct 31 14:04:15.622803 containerd[1613]: 2025-10-31 14:04:15.559 [INFO][3918] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d23cbf68615dd389ee99e4abf1ed3762777d5db453a9d1a7d776cbce1ad1b42a" HandleID="k8s-pod-network.d23cbf68615dd389ee99e4abf1ed3762777d5db453a9d1a7d776cbce1ad1b42a" Workload="localhost-k8s-whisker--7dd95d9845--xmgkr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000c1da0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7dd95d9845-xmgkr", "timestamp":"2025-10-31 14:04:15.558794778 +0000 
UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 14:04:15.622803 containerd[1613]: 2025-10-31 14:04:15.559 [INFO][3918] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 14:04:15.622803 containerd[1613]: 2025-10-31 14:04:15.559 [INFO][3918] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 14:04:15.622803 containerd[1613]: 2025-10-31 14:04:15.560 [INFO][3918] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 14:04:15.622803 containerd[1613]: 2025-10-31 14:04:15.568 [INFO][3918] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d23cbf68615dd389ee99e4abf1ed3762777d5db453a9d1a7d776cbce1ad1b42a" host="localhost" Oct 31 14:04:15.622803 containerd[1613]: 2025-10-31 14:04:15.574 [INFO][3918] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 14:04:15.622803 containerd[1613]: 2025-10-31 14:04:15.579 [INFO][3918] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 14:04:15.622803 containerd[1613]: 2025-10-31 14:04:15.581 [INFO][3918] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 14:04:15.622803 containerd[1613]: 2025-10-31 14:04:15.583 [INFO][3918] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 14:04:15.622803 containerd[1613]: 2025-10-31 14:04:15.583 [INFO][3918] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d23cbf68615dd389ee99e4abf1ed3762777d5db453a9d1a7d776cbce1ad1b42a" host="localhost" Oct 31 14:04:15.623126 containerd[1613]: 2025-10-31 14:04:15.584 [INFO][3918] ipam/ipam.go 1780: Creating new handle: 
k8s-pod-network.d23cbf68615dd389ee99e4abf1ed3762777d5db453a9d1a7d776cbce1ad1b42a Oct 31 14:04:15.623126 containerd[1613]: 2025-10-31 14:04:15.588 [INFO][3918] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d23cbf68615dd389ee99e4abf1ed3762777d5db453a9d1a7d776cbce1ad1b42a" host="localhost" Oct 31 14:04:15.623126 containerd[1613]: 2025-10-31 14:04:15.594 [INFO][3918] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.d23cbf68615dd389ee99e4abf1ed3762777d5db453a9d1a7d776cbce1ad1b42a" host="localhost" Oct 31 14:04:15.623126 containerd[1613]: 2025-10-31 14:04:15.594 [INFO][3918] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.d23cbf68615dd389ee99e4abf1ed3762777d5db453a9d1a7d776cbce1ad1b42a" host="localhost" Oct 31 14:04:15.623126 containerd[1613]: 2025-10-31 14:04:15.594 [INFO][3918] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 14:04:15.623126 containerd[1613]: 2025-10-31 14:04:15.594 [INFO][3918] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="d23cbf68615dd389ee99e4abf1ed3762777d5db453a9d1a7d776cbce1ad1b42a" HandleID="k8s-pod-network.d23cbf68615dd389ee99e4abf1ed3762777d5db453a9d1a7d776cbce1ad1b42a" Workload="localhost-k8s-whisker--7dd95d9845--xmgkr-eth0" Oct 31 14:04:15.623306 containerd[1613]: 2025-10-31 14:04:15.598 [INFO][3903] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d23cbf68615dd389ee99e4abf1ed3762777d5db453a9d1a7d776cbce1ad1b42a" Namespace="calico-system" Pod="whisker-7dd95d9845-xmgkr" WorkloadEndpoint="localhost-k8s-whisker--7dd95d9845--xmgkr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7dd95d9845--xmgkr-eth0", GenerateName:"whisker-7dd95d9845-", Namespace:"calico-system", SelfLink:"", UID:"cbbbd567-9df7-46cd-88ad-c52cb886a0d1", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 14, 4, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7dd95d9845", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7dd95d9845-xmgkr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia4c4a442f42", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 14:04:15.623306 containerd[1613]: 2025-10-31 14:04:15.598 [INFO][3903] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="d23cbf68615dd389ee99e4abf1ed3762777d5db453a9d1a7d776cbce1ad1b42a" Namespace="calico-system" Pod="whisker-7dd95d9845-xmgkr" WorkloadEndpoint="localhost-k8s-whisker--7dd95d9845--xmgkr-eth0" Oct 31 14:04:15.623416 containerd[1613]: 2025-10-31 14:04:15.598 [INFO][3903] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia4c4a442f42 ContainerID="d23cbf68615dd389ee99e4abf1ed3762777d5db453a9d1a7d776cbce1ad1b42a" Namespace="calico-system" Pod="whisker-7dd95d9845-xmgkr" WorkloadEndpoint="localhost-k8s-whisker--7dd95d9845--xmgkr-eth0" Oct 31 14:04:15.623416 containerd[1613]: 2025-10-31 14:04:15.606 [INFO][3903] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d23cbf68615dd389ee99e4abf1ed3762777d5db453a9d1a7d776cbce1ad1b42a" Namespace="calico-system" Pod="whisker-7dd95d9845-xmgkr" WorkloadEndpoint="localhost-k8s-whisker--7dd95d9845--xmgkr-eth0" Oct 31 14:04:15.623485 containerd[1613]: 2025-10-31 14:04:15.606 [INFO][3903] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d23cbf68615dd389ee99e4abf1ed3762777d5db453a9d1a7d776cbce1ad1b42a" Namespace="calico-system" Pod="whisker-7dd95d9845-xmgkr" WorkloadEndpoint="localhost-k8s-whisker--7dd95d9845--xmgkr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7dd95d9845--xmgkr-eth0", GenerateName:"whisker-7dd95d9845-", Namespace:"calico-system", SelfLink:"", UID:"cbbbd567-9df7-46cd-88ad-c52cb886a0d1", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 14, 4, 15, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7dd95d9845", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d23cbf68615dd389ee99e4abf1ed3762777d5db453a9d1a7d776cbce1ad1b42a", Pod:"whisker-7dd95d9845-xmgkr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia4c4a442f42", MAC:"1e:62:3e:02:5b:84", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 14:04:15.623559 containerd[1613]: 2025-10-31 14:04:15.617 [INFO][3903] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d23cbf68615dd389ee99e4abf1ed3762777d5db453a9d1a7d776cbce1ad1b42a" Namespace="calico-system" Pod="whisker-7dd95d9845-xmgkr" WorkloadEndpoint="localhost-k8s-whisker--7dd95d9845--xmgkr-eth0" Oct 31 14:04:15.786779 containerd[1613]: time="2025-10-31T14:04:15.786456810Z" level=info msg="connecting to shim d23cbf68615dd389ee99e4abf1ed3762777d5db453a9d1a7d776cbce1ad1b42a" address="unix:///run/containerd/s/f532919da794cc912f2717509c5ba08cc48d271ead2a81425f229af03f626ade" namespace=k8s.io protocol=ttrpc version=3 Oct 31 14:04:15.823984 systemd[1]: Started cri-containerd-d23cbf68615dd389ee99e4abf1ed3762777d5db453a9d1a7d776cbce1ad1b42a.scope - libcontainer container d23cbf68615dd389ee99e4abf1ed3762777d5db453a9d1a7d776cbce1ad1b42a. 
Oct 31 14:04:15.836433 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 14:04:15.889066 containerd[1613]: time="2025-10-31T14:04:15.888960177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7dd95d9845-xmgkr,Uid:cbbbd567-9df7-46cd-88ad-c52cb886a0d1,Namespace:calico-system,Attempt:0,} returns sandbox id \"d23cbf68615dd389ee99e4abf1ed3762777d5db453a9d1a7d776cbce1ad1b42a\"" Oct 31 14:04:15.896085 containerd[1613]: time="2025-10-31T14:04:15.896041991Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 31 14:04:15.949700 kubelet[2763]: E1031 14:04:15.949635 2763 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 14:04:15.951045 containerd[1613]: time="2025-10-31T14:04:15.951003500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-tftsn,Uid:77523b44-946d-4f16-81ec-a47f0ef59d93,Namespace:kube-system,Attempt:0,}" Oct 31 14:04:16.134063 systemd-networkd[1520]: cali09d055cfd59: Link UP Oct 31 14:04:16.135250 systemd-networkd[1520]: cali09d055cfd59: Gained carrier Oct 31 14:04:16.151119 containerd[1613]: 2025-10-31 14:04:16.028 [INFO][4075] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 31 14:04:16.151119 containerd[1613]: 2025-10-31 14:04:16.047 [INFO][4075] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--tftsn-eth0 coredns-66bc5c9577- kube-system 77523b44-946d-4f16-81ec-a47f0ef59d93 886 0 2025-10-31 14:03:43 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-tftsn eth0 coredns [] [] [kns.kube-system 
ksa.kube-system.coredns] cali09d055cfd59 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="7172429d50431cbf8d8fde4e7f6434232d0e7b1a9c83d18bef8d7282142c6d8d" Namespace="kube-system" Pod="coredns-66bc5c9577-tftsn" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--tftsn-" Oct 31 14:04:16.151119 containerd[1613]: 2025-10-31 14:04:16.048 [INFO][4075] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7172429d50431cbf8d8fde4e7f6434232d0e7b1a9c83d18bef8d7282142c6d8d" Namespace="kube-system" Pod="coredns-66bc5c9577-tftsn" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--tftsn-eth0" Oct 31 14:04:16.151119 containerd[1613]: 2025-10-31 14:04:16.094 [INFO][4095] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7172429d50431cbf8d8fde4e7f6434232d0e7b1a9c83d18bef8d7282142c6d8d" HandleID="k8s-pod-network.7172429d50431cbf8d8fde4e7f6434232d0e7b1a9c83d18bef8d7282142c6d8d" Workload="localhost-k8s-coredns--66bc5c9577--tftsn-eth0" Oct 31 14:04:16.151526 containerd[1613]: 2025-10-31 14:04:16.095 [INFO][4095] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7172429d50431cbf8d8fde4e7f6434232d0e7b1a9c83d18bef8d7282142c6d8d" HandleID="k8s-pod-network.7172429d50431cbf8d8fde4e7f6434232d0e7b1a9c83d18bef8d7282142c6d8d" Workload="localhost-k8s-coredns--66bc5c9577--tftsn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000122610), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-tftsn", "timestamp":"2025-10-31 14:04:16.094635216 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 14:04:16.151526 containerd[1613]: 2025-10-31 14:04:16.095 [INFO][4095] ipam/ipam_plugin.go 377: About to 
acquire host-wide IPAM lock. Oct 31 14:04:16.151526 containerd[1613]: 2025-10-31 14:04:16.095 [INFO][4095] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 14:04:16.151526 containerd[1613]: 2025-10-31 14:04:16.095 [INFO][4095] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 14:04:16.151526 containerd[1613]: 2025-10-31 14:04:16.104 [INFO][4095] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7172429d50431cbf8d8fde4e7f6434232d0e7b1a9c83d18bef8d7282142c6d8d" host="localhost" Oct 31 14:04:16.151526 containerd[1613]: 2025-10-31 14:04:16.108 [INFO][4095] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 14:04:16.151526 containerd[1613]: 2025-10-31 14:04:16.111 [INFO][4095] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 14:04:16.151526 containerd[1613]: 2025-10-31 14:04:16.113 [INFO][4095] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 14:04:16.151526 containerd[1613]: 2025-10-31 14:04:16.115 [INFO][4095] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 14:04:16.151526 containerd[1613]: 2025-10-31 14:04:16.115 [INFO][4095] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7172429d50431cbf8d8fde4e7f6434232d0e7b1a9c83d18bef8d7282142c6d8d" host="localhost" Oct 31 14:04:16.151826 containerd[1613]: 2025-10-31 14:04:16.117 [INFO][4095] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7172429d50431cbf8d8fde4e7f6434232d0e7b1a9c83d18bef8d7282142c6d8d Oct 31 14:04:16.151826 containerd[1613]: 2025-10-31 14:04:16.120 [INFO][4095] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7172429d50431cbf8d8fde4e7f6434232d0e7b1a9c83d18bef8d7282142c6d8d" host="localhost" Oct 31 14:04:16.151826 containerd[1613]: 2025-10-31 
14:04:16.127 [INFO][4095] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.7172429d50431cbf8d8fde4e7f6434232d0e7b1a9c83d18bef8d7282142c6d8d" host="localhost" Oct 31 14:04:16.151826 containerd[1613]: 2025-10-31 14:04:16.127 [INFO][4095] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.7172429d50431cbf8d8fde4e7f6434232d0e7b1a9c83d18bef8d7282142c6d8d" host="localhost" Oct 31 14:04:16.151826 containerd[1613]: 2025-10-31 14:04:16.127 [INFO][4095] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 14:04:16.151826 containerd[1613]: 2025-10-31 14:04:16.127 [INFO][4095] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="7172429d50431cbf8d8fde4e7f6434232d0e7b1a9c83d18bef8d7282142c6d8d" HandleID="k8s-pod-network.7172429d50431cbf8d8fde4e7f6434232d0e7b1a9c83d18bef8d7282142c6d8d" Workload="localhost-k8s-coredns--66bc5c9577--tftsn-eth0" Oct 31 14:04:16.152099 containerd[1613]: 2025-10-31 14:04:16.131 [INFO][4075] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7172429d50431cbf8d8fde4e7f6434232d0e7b1a9c83d18bef8d7282142c6d8d" Namespace="kube-system" Pod="coredns-66bc5c9577-tftsn" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--tftsn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--tftsn-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"77523b44-946d-4f16-81ec-a47f0ef59d93", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 14, 3, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-tftsn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali09d055cfd59", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 14:04:16.152099 containerd[1613]: 2025-10-31 14:04:16.131 [INFO][4075] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="7172429d50431cbf8d8fde4e7f6434232d0e7b1a9c83d18bef8d7282142c6d8d" Namespace="kube-system" Pod="coredns-66bc5c9577-tftsn" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--tftsn-eth0" Oct 31 14:04:16.152099 containerd[1613]: 2025-10-31 14:04:16.131 [INFO][4075] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali09d055cfd59 ContainerID="7172429d50431cbf8d8fde4e7f6434232d0e7b1a9c83d18bef8d7282142c6d8d" 
Namespace="kube-system" Pod="coredns-66bc5c9577-tftsn" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--tftsn-eth0" Oct 31 14:04:16.152099 containerd[1613]: 2025-10-31 14:04:16.134 [INFO][4075] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7172429d50431cbf8d8fde4e7f6434232d0e7b1a9c83d18bef8d7282142c6d8d" Namespace="kube-system" Pod="coredns-66bc5c9577-tftsn" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--tftsn-eth0" Oct 31 14:04:16.152099 containerd[1613]: 2025-10-31 14:04:16.135 [INFO][4075] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7172429d50431cbf8d8fde4e7f6434232d0e7b1a9c83d18bef8d7282142c6d8d" Namespace="kube-system" Pod="coredns-66bc5c9577-tftsn" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--tftsn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--tftsn-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"77523b44-946d-4f16-81ec-a47f0ef59d93", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 14, 3, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7172429d50431cbf8d8fde4e7f6434232d0e7b1a9c83d18bef8d7282142c6d8d", Pod:"coredns-66bc5c9577-tftsn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali09d055cfd59", MAC:"da:64:18:65:2d:69", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 14:04:16.152099 containerd[1613]: 2025-10-31 14:04:16.146 [INFO][4075] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7172429d50431cbf8d8fde4e7f6434232d0e7b1a9c83d18bef8d7282142c6d8d" Namespace="kube-system" Pod="coredns-66bc5c9577-tftsn" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--tftsn-eth0" Oct 31 14:04:16.210903 containerd[1613]: time="2025-10-31T14:04:16.210215768Z" level=info msg="connecting to shim 7172429d50431cbf8d8fde4e7f6434232d0e7b1a9c83d18bef8d7282142c6d8d" address="unix:///run/containerd/s/fa0d9faff656cb8852ffa512d65158809da8045a685d8a45400c9d4370bb8587" namespace=k8s.io protocol=ttrpc version=3 Oct 31 14:04:16.238600 containerd[1613]: time="2025-10-31T14:04:16.238550365Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 14:04:16.247985 systemd[1]: Started cri-containerd-7172429d50431cbf8d8fde4e7f6434232d0e7b1a9c83d18bef8d7282142c6d8d.scope - libcontainer container 
7172429d50431cbf8d8fde4e7f6434232d0e7b1a9c83d18bef8d7282142c6d8d. Oct 31 14:04:16.261914 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 14:04:16.356780 containerd[1613]: time="2025-10-31T14:04:16.356706518Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 31 14:04:16.370287 containerd[1613]: time="2025-10-31T14:04:16.370211832Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 31 14:04:16.370995 kubelet[2763]: E1031 14:04:16.370547 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 14:04:16.370995 kubelet[2763]: E1031 14:04:16.370617 2763 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 14:04:16.370995 kubelet[2763]: E1031 14:04:16.370716 2763 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-7dd95d9845-xmgkr_calico-system(cbbbd567-9df7-46cd-88ad-c52cb886a0d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 31 14:04:16.372142 containerd[1613]: time="2025-10-31T14:04:16.372089718Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 31 14:04:16.430940 systemd-networkd[1520]: vxlan.calico: Link UP Oct 31 14:04:16.430950 systemd-networkd[1520]: vxlan.calico: Gained carrier Oct 31 14:04:16.598119 containerd[1613]: time="2025-10-31T14:04:16.598038827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-tftsn,Uid:77523b44-946d-4f16-81ec-a47f0ef59d93,Namespace:kube-system,Attempt:0,} returns sandbox id \"7172429d50431cbf8d8fde4e7f6434232d0e7b1a9c83d18bef8d7282142c6d8d\"" Oct 31 14:04:16.598928 kubelet[2763]: E1031 14:04:16.598900 2763 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 14:04:16.616900 containerd[1613]: time="2025-10-31T14:04:16.616867760Z" level=info msg="CreateContainer within sandbox \"7172429d50431cbf8d8fde4e7f6434232d0e7b1a9c83d18bef8d7282142c6d8d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 31 14:04:16.645877 containerd[1613]: time="2025-10-31T14:04:16.644133278Z" level=info msg="Container eaa03e96a5bd5bb2768efe792787fa3e45713c8b3368e2285ff7dd39c7b845cc: CDI devices from CRI Config.CDIDevices: []" Oct 31 14:04:16.646452 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2142392448.mount: Deactivated successfully. 
Oct 31 14:04:16.652213 containerd[1613]: time="2025-10-31T14:04:16.652149904Z" level=info msg="CreateContainer within sandbox \"7172429d50431cbf8d8fde4e7f6434232d0e7b1a9c83d18bef8d7282142c6d8d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"eaa03e96a5bd5bb2768efe792787fa3e45713c8b3368e2285ff7dd39c7b845cc\"" Oct 31 14:04:16.654127 containerd[1613]: time="2025-10-31T14:04:16.654078150Z" level=info msg="StartContainer for \"eaa03e96a5bd5bb2768efe792787fa3e45713c8b3368e2285ff7dd39c7b845cc\"" Oct 31 14:04:16.655429 containerd[1613]: time="2025-10-31T14:04:16.655395936Z" level=info msg="connecting to shim eaa03e96a5bd5bb2768efe792787fa3e45713c8b3368e2285ff7dd39c7b845cc" address="unix:///run/containerd/s/fa0d9faff656cb8852ffa512d65158809da8045a685d8a45400c9d4370bb8587" protocol=ttrpc version=3 Oct 31 14:04:16.687175 systemd[1]: Started cri-containerd-eaa03e96a5bd5bb2768efe792787fa3e45713c8b3368e2285ff7dd39c7b845cc.scope - libcontainer container eaa03e96a5bd5bb2768efe792787fa3e45713c8b3368e2285ff7dd39c7b845cc. 
Oct 31 14:04:16.744885 containerd[1613]: time="2025-10-31T14:04:16.744530368Z" level=info msg="StartContainer for \"eaa03e96a5bd5bb2768efe792787fa3e45713c8b3368e2285ff7dd39c7b845cc\" returns successfully" Oct 31 14:04:16.844020 systemd-networkd[1520]: calia4c4a442f42: Gained IPv6LL Oct 31 14:04:16.879998 containerd[1613]: time="2025-10-31T14:04:16.879931196Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 14:04:16.884512 containerd[1613]: time="2025-10-31T14:04:16.884445236Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 31 14:04:16.884651 containerd[1613]: time="2025-10-31T14:04:16.884551688Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 31 14:04:16.884882 kubelet[2763]: E1031 14:04:16.884765 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 14:04:16.884882 kubelet[2763]: E1031 14:04:16.884825 2763 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 14:04:16.884984 kubelet[2763]: E1031 14:04:16.884951 2763 
kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-7dd95d9845-xmgkr_calico-system(cbbbd567-9df7-46cd-88ad-c52cb886a0d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 31 14:04:16.885040 kubelet[2763]: E1031 14:04:16.885008 2763 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7dd95d9845-xmgkr" podUID="cbbbd567-9df7-46cd-88ad-c52cb886a0d1" Oct 31 14:04:16.948460 kubelet[2763]: I1031 14:04:16.948329 2763 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="215ef2ac-986a-4fe6-a85d-c11759880d37" path="/var/lib/kubelet/pods/215ef2ac-986a-4fe6-a85d-c11759880d37/volumes" Oct 31 14:04:16.956192 containerd[1613]: time="2025-10-31T14:04:16.956143240Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rpkmr,Uid:d72fcf62-30d2-4a4d-9feb-16a72bc97e14,Namespace:calico-system,Attempt:0,}" Oct 31 14:04:16.958077 containerd[1613]: time="2025-10-31T14:04:16.957951497Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7694dfd98b-9cd4s,Uid:831c9a9e-d727-4252-8c51-c27a6cbc929f,Namespace:calico-apiserver,Attempt:0,}" Oct 31 14:04:16.961316 kubelet[2763]: E1031 14:04:16.961265 2763 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 14:04:16.962341 containerd[1613]: time="2025-10-31T14:04:16.961994103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-thzch,Uid:f16f1057-636c-439c-9e25-76999ec5fe91,Namespace:kube-system,Attempt:0,}" Oct 31 14:04:17.083289 systemd[1]: Started sshd@7-10.0.0.39:22-10.0.0.1:55994.service - OpenSSH per-connection server daemon (10.0.0.1:55994). Oct 31 14:04:17.104774 kubelet[2763]: E1031 14:04:17.104699 2763 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 14:04:17.108998 kubelet[2763]: E1031 14:04:17.108947 2763 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7dd95d9845-xmgkr" 
podUID="cbbbd567-9df7-46cd-88ad-c52cb886a0d1" Oct 31 14:04:17.124408 systemd-networkd[1520]: calif1d980df14e: Link UP Oct 31 14:04:17.124614 systemd-networkd[1520]: calif1d980df14e: Gained carrier Oct 31 14:04:17.136340 kubelet[2763]: I1031 14:04:17.135931 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-tftsn" podStartSLOduration=34.135906517 podStartE2EDuration="34.135906517s" podCreationTimestamp="2025-10-31 14:03:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 14:04:17.129352734 +0000 UTC m=+40.281958222" watchObservedRunningTime="2025-10-31 14:04:17.135906517 +0000 UTC m=+40.288511985" Oct 31 14:04:17.158256 containerd[1613]: 2025-10-31 14:04:17.018 [INFO][4298] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7694dfd98b--9cd4s-eth0 calico-apiserver-7694dfd98b- calico-apiserver 831c9a9e-d727-4252-8c51-c27a6cbc929f 885 0 2025-10-31 14:03:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7694dfd98b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7694dfd98b-9cd4s eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif1d980df14e [] [] }} ContainerID="c78f8b055a20f34969cdc3c55426798c9bf50775346318ae24f5f76c0f3a9c87" Namespace="calico-apiserver" Pod="calico-apiserver-7694dfd98b-9cd4s" WorkloadEndpoint="localhost-k8s-calico--apiserver--7694dfd98b--9cd4s-" Oct 31 14:04:17.158256 containerd[1613]: 2025-10-31 14:04:17.018 [INFO][4298] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c78f8b055a20f34969cdc3c55426798c9bf50775346318ae24f5f76c0f3a9c87" 
Namespace="calico-apiserver" Pod="calico-apiserver-7694dfd98b-9cd4s" WorkloadEndpoint="localhost-k8s-calico--apiserver--7694dfd98b--9cd4s-eth0" Oct 31 14:04:17.158256 containerd[1613]: 2025-10-31 14:04:17.063 [INFO][4336] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c78f8b055a20f34969cdc3c55426798c9bf50775346318ae24f5f76c0f3a9c87" HandleID="k8s-pod-network.c78f8b055a20f34969cdc3c55426798c9bf50775346318ae24f5f76c0f3a9c87" Workload="localhost-k8s-calico--apiserver--7694dfd98b--9cd4s-eth0" Oct 31 14:04:17.158256 containerd[1613]: 2025-10-31 14:04:17.066 [INFO][4336] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c78f8b055a20f34969cdc3c55426798c9bf50775346318ae24f5f76c0f3a9c87" HandleID="k8s-pod-network.c78f8b055a20f34969cdc3c55426798c9bf50775346318ae24f5f76c0f3a9c87" Workload="localhost-k8s-calico--apiserver--7694dfd98b--9cd4s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f1d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7694dfd98b-9cd4s", "timestamp":"2025-10-31 14:04:17.063115641 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 14:04:17.158256 containerd[1613]: 2025-10-31 14:04:17.066 [INFO][4336] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 14:04:17.158256 containerd[1613]: 2025-10-31 14:04:17.066 [INFO][4336] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 14:04:17.158256 containerd[1613]: 2025-10-31 14:04:17.066 [INFO][4336] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 14:04:17.158256 containerd[1613]: 2025-10-31 14:04:17.076 [INFO][4336] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c78f8b055a20f34969cdc3c55426798c9bf50775346318ae24f5f76c0f3a9c87" host="localhost" Oct 31 14:04:17.158256 containerd[1613]: 2025-10-31 14:04:17.085 [INFO][4336] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 14:04:17.158256 containerd[1613]: 2025-10-31 14:04:17.090 [INFO][4336] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 14:04:17.158256 containerd[1613]: 2025-10-31 14:04:17.092 [INFO][4336] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 14:04:17.158256 containerd[1613]: 2025-10-31 14:04:17.094 [INFO][4336] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 14:04:17.158256 containerd[1613]: 2025-10-31 14:04:17.094 [INFO][4336] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c78f8b055a20f34969cdc3c55426798c9bf50775346318ae24f5f76c0f3a9c87" host="localhost" Oct 31 14:04:17.158256 containerd[1613]: 2025-10-31 14:04:17.095 [INFO][4336] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c78f8b055a20f34969cdc3c55426798c9bf50775346318ae24f5f76c0f3a9c87 Oct 31 14:04:17.158256 containerd[1613]: 2025-10-31 14:04:17.102 [INFO][4336] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c78f8b055a20f34969cdc3c55426798c9bf50775346318ae24f5f76c0f3a9c87" host="localhost" Oct 31 14:04:17.158256 containerd[1613]: 2025-10-31 14:04:17.115 [INFO][4336] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.c78f8b055a20f34969cdc3c55426798c9bf50775346318ae24f5f76c0f3a9c87" host="localhost" Oct 31 14:04:17.158256 containerd[1613]: 2025-10-31 14:04:17.115 [INFO][4336] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.c78f8b055a20f34969cdc3c55426798c9bf50775346318ae24f5f76c0f3a9c87" host="localhost" Oct 31 14:04:17.158256 containerd[1613]: 2025-10-31 14:04:17.115 [INFO][4336] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 14:04:17.158256 containerd[1613]: 2025-10-31 14:04:17.115 [INFO][4336] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="c78f8b055a20f34969cdc3c55426798c9bf50775346318ae24f5f76c0f3a9c87" HandleID="k8s-pod-network.c78f8b055a20f34969cdc3c55426798c9bf50775346318ae24f5f76c0f3a9c87" Workload="localhost-k8s-calico--apiserver--7694dfd98b--9cd4s-eth0" Oct 31 14:04:17.159447 containerd[1613]: 2025-10-31 14:04:17.119 [INFO][4298] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c78f8b055a20f34969cdc3c55426798c9bf50775346318ae24f5f76c0f3a9c87" Namespace="calico-apiserver" Pod="calico-apiserver-7694dfd98b-9cd4s" WorkloadEndpoint="localhost-k8s-calico--apiserver--7694dfd98b--9cd4s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7694dfd98b--9cd4s-eth0", GenerateName:"calico-apiserver-7694dfd98b-", Namespace:"calico-apiserver", SelfLink:"", UID:"831c9a9e-d727-4252-8c51-c27a6cbc929f", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 14, 3, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7694dfd98b", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7694dfd98b-9cd4s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif1d980df14e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 14:04:17.159447 containerd[1613]: 2025-10-31 14:04:17.119 [INFO][4298] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="c78f8b055a20f34969cdc3c55426798c9bf50775346318ae24f5f76c0f3a9c87" Namespace="calico-apiserver" Pod="calico-apiserver-7694dfd98b-9cd4s" WorkloadEndpoint="localhost-k8s-calico--apiserver--7694dfd98b--9cd4s-eth0" Oct 31 14:04:17.159447 containerd[1613]: 2025-10-31 14:04:17.119 [INFO][4298] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif1d980df14e ContainerID="c78f8b055a20f34969cdc3c55426798c9bf50775346318ae24f5f76c0f3a9c87" Namespace="calico-apiserver" Pod="calico-apiserver-7694dfd98b-9cd4s" WorkloadEndpoint="localhost-k8s-calico--apiserver--7694dfd98b--9cd4s-eth0" Oct 31 14:04:17.159447 containerd[1613]: 2025-10-31 14:04:17.125 [INFO][4298] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c78f8b055a20f34969cdc3c55426798c9bf50775346318ae24f5f76c0f3a9c87" Namespace="calico-apiserver" Pod="calico-apiserver-7694dfd98b-9cd4s" WorkloadEndpoint="localhost-k8s-calico--apiserver--7694dfd98b--9cd4s-eth0" Oct 31 14:04:17.159447 containerd[1613]: 2025-10-31 14:04:17.125 [INFO][4298] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="c78f8b055a20f34969cdc3c55426798c9bf50775346318ae24f5f76c0f3a9c87" Namespace="calico-apiserver" Pod="calico-apiserver-7694dfd98b-9cd4s" WorkloadEndpoint="localhost-k8s-calico--apiserver--7694dfd98b--9cd4s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7694dfd98b--9cd4s-eth0", GenerateName:"calico-apiserver-7694dfd98b-", Namespace:"calico-apiserver", SelfLink:"", UID:"831c9a9e-d727-4252-8c51-c27a6cbc929f", ResourceVersion:"885", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 14, 3, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7694dfd98b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c78f8b055a20f34969cdc3c55426798c9bf50775346318ae24f5f76c0f3a9c87", Pod:"calico-apiserver-7694dfd98b-9cd4s", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif1d980df14e", MAC:"fe:8e:fd:3b:a6:b0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 14:04:17.159447 containerd[1613]: 2025-10-31 14:04:17.154 [INFO][4298] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="c78f8b055a20f34969cdc3c55426798c9bf50775346318ae24f5f76c0f3a9c87" Namespace="calico-apiserver" Pod="calico-apiserver-7694dfd98b-9cd4s" WorkloadEndpoint="localhost-k8s-calico--apiserver--7694dfd98b--9cd4s-eth0" Oct 31 14:04:17.186442 sshd[4362]: Accepted publickey for core from 10.0.0.1 port 55994 ssh2: RSA SHA256:vB+C50gRw4XKWL1h6W1ZxwGTSFbLNajJZ+GX1JbmfgQ Oct 31 14:04:17.189786 sshd-session[4362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 14:04:17.199150 systemd-logind[1600]: New session 8 of user core. Oct 31 14:04:17.207028 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 31 14:04:17.226204 containerd[1613]: time="2025-10-31T14:04:17.225816280Z" level=info msg="connecting to shim c78f8b055a20f34969cdc3c55426798c9bf50775346318ae24f5f76c0f3a9c87" address="unix:///run/containerd/s/1555695f53266ed2d6d8955b974e96fcc4ac666d44b983caa4c5dfc47097f948" namespace=k8s.io protocol=ttrpc version=3 Oct 31 14:04:17.240804 systemd-networkd[1520]: cali4c5da4af37b: Link UP Oct 31 14:04:17.241459 systemd-networkd[1520]: cali4c5da4af37b: Gained carrier Oct 31 14:04:17.266879 containerd[1613]: 2025-10-31 14:04:17.014 [INFO][4289] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--rpkmr-eth0 csi-node-driver- calico-system d72fcf62-30d2-4a4d-9feb-16a72bc97e14 770 0 2025-10-31 14:03:56 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-rpkmr eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali4c5da4af37b [] [] }} ContainerID="2deaddb1c3d228e273affdb09199870431593a49c106392fc70ed947ebc99aa6" Namespace="calico-system" 
Pod="csi-node-driver-rpkmr" WorkloadEndpoint="localhost-k8s-csi--node--driver--rpkmr-" Oct 31 14:04:17.266879 containerd[1613]: 2025-10-31 14:04:17.015 [INFO][4289] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2deaddb1c3d228e273affdb09199870431593a49c106392fc70ed947ebc99aa6" Namespace="calico-system" Pod="csi-node-driver-rpkmr" WorkloadEndpoint="localhost-k8s-csi--node--driver--rpkmr-eth0" Oct 31 14:04:17.266879 containerd[1613]: 2025-10-31 14:04:17.067 [INFO][4334] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2deaddb1c3d228e273affdb09199870431593a49c106392fc70ed947ebc99aa6" HandleID="k8s-pod-network.2deaddb1c3d228e273affdb09199870431593a49c106392fc70ed947ebc99aa6" Workload="localhost-k8s-csi--node--driver--rpkmr-eth0" Oct 31 14:04:17.266879 containerd[1613]: 2025-10-31 14:04:17.067 [INFO][4334] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2deaddb1c3d228e273affdb09199870431593a49c106392fc70ed947ebc99aa6" HandleID="k8s-pod-network.2deaddb1c3d228e273affdb09199870431593a49c106392fc70ed947ebc99aa6" Workload="localhost-k8s-csi--node--driver--rpkmr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7250), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-rpkmr", "timestamp":"2025-10-31 14:04:17.067562599 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 14:04:17.266879 containerd[1613]: 2025-10-31 14:04:17.067 [INFO][4334] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 14:04:17.266879 containerd[1613]: 2025-10-31 14:04:17.117 [INFO][4334] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 14:04:17.266879 containerd[1613]: 2025-10-31 14:04:17.117 [INFO][4334] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 14:04:17.266879 containerd[1613]: 2025-10-31 14:04:17.179 [INFO][4334] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2deaddb1c3d228e273affdb09199870431593a49c106392fc70ed947ebc99aa6" host="localhost" Oct 31 14:04:17.266879 containerd[1613]: 2025-10-31 14:04:17.199 [INFO][4334] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 14:04:17.266879 containerd[1613]: 2025-10-31 14:04:17.204 [INFO][4334] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 14:04:17.266879 containerd[1613]: 2025-10-31 14:04:17.206 [INFO][4334] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 14:04:17.266879 containerd[1613]: 2025-10-31 14:04:17.209 [INFO][4334] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 14:04:17.266879 containerd[1613]: 2025-10-31 14:04:17.209 [INFO][4334] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2deaddb1c3d228e273affdb09199870431593a49c106392fc70ed947ebc99aa6" host="localhost" Oct 31 14:04:17.266879 containerd[1613]: 2025-10-31 14:04:17.210 [INFO][4334] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2deaddb1c3d228e273affdb09199870431593a49c106392fc70ed947ebc99aa6 Oct 31 14:04:17.266879 containerd[1613]: 2025-10-31 14:04:17.216 [INFO][4334] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2deaddb1c3d228e273affdb09199870431593a49c106392fc70ed947ebc99aa6" host="localhost" Oct 31 14:04:17.266879 containerd[1613]: 2025-10-31 14:04:17.225 [INFO][4334] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.2deaddb1c3d228e273affdb09199870431593a49c106392fc70ed947ebc99aa6" host="localhost" Oct 31 14:04:17.266879 containerd[1613]: 2025-10-31 14:04:17.225 [INFO][4334] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.2deaddb1c3d228e273affdb09199870431593a49c106392fc70ed947ebc99aa6" host="localhost" Oct 31 14:04:17.266879 containerd[1613]: 2025-10-31 14:04:17.225 [INFO][4334] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 14:04:17.266879 containerd[1613]: 2025-10-31 14:04:17.225 [INFO][4334] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="2deaddb1c3d228e273affdb09199870431593a49c106392fc70ed947ebc99aa6" HandleID="k8s-pod-network.2deaddb1c3d228e273affdb09199870431593a49c106392fc70ed947ebc99aa6" Workload="localhost-k8s-csi--node--driver--rpkmr-eth0" Oct 31 14:04:17.266045 systemd[1]: Started cri-containerd-c78f8b055a20f34969cdc3c55426798c9bf50775346318ae24f5f76c0f3a9c87.scope - libcontainer container c78f8b055a20f34969cdc3c55426798c9bf50775346318ae24f5f76c0f3a9c87. 
Oct 31 14:04:17.267537 containerd[1613]: 2025-10-31 14:04:17.237 [INFO][4289] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2deaddb1c3d228e273affdb09199870431593a49c106392fc70ed947ebc99aa6" Namespace="calico-system" Pod="csi-node-driver-rpkmr" WorkloadEndpoint="localhost-k8s-csi--node--driver--rpkmr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rpkmr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d72fcf62-30d2-4a4d-9feb-16a72bc97e14", ResourceVersion:"770", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 14, 3, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-rpkmr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4c5da4af37b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 14:04:17.267537 containerd[1613]: 2025-10-31 14:04:17.237 [INFO][4289] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] 
ContainerID="2deaddb1c3d228e273affdb09199870431593a49c106392fc70ed947ebc99aa6" Namespace="calico-system" Pod="csi-node-driver-rpkmr" WorkloadEndpoint="localhost-k8s-csi--node--driver--rpkmr-eth0" Oct 31 14:04:17.267537 containerd[1613]: 2025-10-31 14:04:17.237 [INFO][4289] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4c5da4af37b ContainerID="2deaddb1c3d228e273affdb09199870431593a49c106392fc70ed947ebc99aa6" Namespace="calico-system" Pod="csi-node-driver-rpkmr" WorkloadEndpoint="localhost-k8s-csi--node--driver--rpkmr-eth0" Oct 31 14:04:17.267537 containerd[1613]: 2025-10-31 14:04:17.241 [INFO][4289] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2deaddb1c3d228e273affdb09199870431593a49c106392fc70ed947ebc99aa6" Namespace="calico-system" Pod="csi-node-driver-rpkmr" WorkloadEndpoint="localhost-k8s-csi--node--driver--rpkmr-eth0" Oct 31 14:04:17.267537 containerd[1613]: 2025-10-31 14:04:17.241 [INFO][4289] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2deaddb1c3d228e273affdb09199870431593a49c106392fc70ed947ebc99aa6" Namespace="calico-system" Pod="csi-node-driver-rpkmr" WorkloadEndpoint="localhost-k8s-csi--node--driver--rpkmr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--rpkmr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d72fcf62-30d2-4a4d-9feb-16a72bc97e14", ResourceVersion:"770", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 14, 3, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2deaddb1c3d228e273affdb09199870431593a49c106392fc70ed947ebc99aa6", Pod:"csi-node-driver-rpkmr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4c5da4af37b", MAC:"8a:4d:26:12:4a:80", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 14:04:17.267537 containerd[1613]: 2025-10-31 14:04:17.256 [INFO][4289] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2deaddb1c3d228e273affdb09199870431593a49c106392fc70ed947ebc99aa6" Namespace="calico-system" Pod="csi-node-driver-rpkmr" WorkloadEndpoint="localhost-k8s-csi--node--driver--rpkmr-eth0" Oct 31 14:04:17.313538 containerd[1613]: time="2025-10-31T14:04:17.313474658Z" level=info msg="connecting to shim 2deaddb1c3d228e273affdb09199870431593a49c106392fc70ed947ebc99aa6" address="unix:///run/containerd/s/962a3e78bb6128b31461774b1c40a32e1a65c075ba67f47ff0f5b85fb5e5916f" namespace=k8s.io protocol=ttrpc version=3 Oct 31 14:04:17.321122 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 14:04:17.348083 systemd[1]: Started cri-containerd-2deaddb1c3d228e273affdb09199870431593a49c106392fc70ed947ebc99aa6.scope - libcontainer container 2deaddb1c3d228e273affdb09199870431593a49c106392fc70ed947ebc99aa6. 
Oct 31 14:04:17.375700 sshd[4379]: Connection closed by 10.0.0.1 port 55994 Oct 31 14:04:17.375270 sshd-session[4362]: pam_unix(sshd:session): session closed for user core Oct 31 14:04:17.379553 systemd-networkd[1520]: cali1813c569520: Link UP Oct 31 14:04:17.381924 systemd-networkd[1520]: cali1813c569520: Gained carrier Oct 31 14:04:17.386687 systemd[1]: sshd@7-10.0.0.39:22-10.0.0.1:55994.service: Deactivated successfully. Oct 31 14:04:17.388951 systemd-logind[1600]: Session 8 logged out. Waiting for processes to exit. Oct 31 14:04:17.390514 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 14:04:17.391005 systemd[1]: session-8.scope: Deactivated successfully. Oct 31 14:04:17.400334 systemd-logind[1600]: Removed session 8. Oct 31 14:04:17.405238 containerd[1613]: 2025-10-31 14:04:17.023 [INFO][4310] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--thzch-eth0 coredns-66bc5c9577- kube-system f16f1057-636c-439c-9e25-76999ec5fe91 878 0 2025-10-31 14:03:43 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-thzch eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1813c569520 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="273d726f464dc884ac9d51e1668bd05f5dedda2b419514ecb18a0b06c458854f" Namespace="kube-system" Pod="coredns-66bc5c9577-thzch" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--thzch-" Oct 31 14:04:17.405238 containerd[1613]: 2025-10-31 14:04:17.024 [INFO][4310] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="273d726f464dc884ac9d51e1668bd05f5dedda2b419514ecb18a0b06c458854f" 
Namespace="kube-system" Pod="coredns-66bc5c9577-thzch" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--thzch-eth0" Oct 31 14:04:17.405238 containerd[1613]: 2025-10-31 14:04:17.082 [INFO][4342] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="273d726f464dc884ac9d51e1668bd05f5dedda2b419514ecb18a0b06c458854f" HandleID="k8s-pod-network.273d726f464dc884ac9d51e1668bd05f5dedda2b419514ecb18a0b06c458854f" Workload="localhost-k8s-coredns--66bc5c9577--thzch-eth0" Oct 31 14:04:17.405238 containerd[1613]: 2025-10-31 14:04:17.084 [INFO][4342] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="273d726f464dc884ac9d51e1668bd05f5dedda2b419514ecb18a0b06c458854f" HandleID="k8s-pod-network.273d726f464dc884ac9d51e1668bd05f5dedda2b419514ecb18a0b06c458854f" Workload="localhost-k8s-coredns--66bc5c9577--thzch-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00019f580), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-thzch", "timestamp":"2025-10-31 14:04:17.082665109 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 14:04:17.405238 containerd[1613]: 2025-10-31 14:04:17.085 [INFO][4342] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 14:04:17.405238 containerd[1613]: 2025-10-31 14:04:17.225 [INFO][4342] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 14:04:17.405238 containerd[1613]: 2025-10-31 14:04:17.225 [INFO][4342] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 14:04:17.405238 containerd[1613]: 2025-10-31 14:04:17.280 [INFO][4342] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.273d726f464dc884ac9d51e1668bd05f5dedda2b419514ecb18a0b06c458854f" host="localhost" Oct 31 14:04:17.405238 containerd[1613]: 2025-10-31 14:04:17.302 [INFO][4342] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 14:04:17.405238 containerd[1613]: 2025-10-31 14:04:17.318 [INFO][4342] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 14:04:17.405238 containerd[1613]: 2025-10-31 14:04:17.322 [INFO][4342] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 14:04:17.405238 containerd[1613]: 2025-10-31 14:04:17.329 [INFO][4342] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 14:04:17.405238 containerd[1613]: 2025-10-31 14:04:17.330 [INFO][4342] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.273d726f464dc884ac9d51e1668bd05f5dedda2b419514ecb18a0b06c458854f" host="localhost" Oct 31 14:04:17.405238 containerd[1613]: 2025-10-31 14:04:17.334 [INFO][4342] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.273d726f464dc884ac9d51e1668bd05f5dedda2b419514ecb18a0b06c458854f Oct 31 14:04:17.405238 containerd[1613]: 2025-10-31 14:04:17.343 [INFO][4342] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.273d726f464dc884ac9d51e1668bd05f5dedda2b419514ecb18a0b06c458854f" host="localhost" Oct 31 14:04:17.405238 containerd[1613]: 2025-10-31 14:04:17.354 [INFO][4342] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.273d726f464dc884ac9d51e1668bd05f5dedda2b419514ecb18a0b06c458854f" host="localhost" Oct 31 14:04:17.405238 containerd[1613]: 2025-10-31 14:04:17.355 [INFO][4342] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.273d726f464dc884ac9d51e1668bd05f5dedda2b419514ecb18a0b06c458854f" host="localhost" Oct 31 14:04:17.405238 containerd[1613]: 2025-10-31 14:04:17.355 [INFO][4342] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 14:04:17.405238 containerd[1613]: 2025-10-31 14:04:17.355 [INFO][4342] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="273d726f464dc884ac9d51e1668bd05f5dedda2b419514ecb18a0b06c458854f" HandleID="k8s-pod-network.273d726f464dc884ac9d51e1668bd05f5dedda2b419514ecb18a0b06c458854f" Workload="localhost-k8s-coredns--66bc5c9577--thzch-eth0" Oct 31 14:04:17.406128 containerd[1613]: 2025-10-31 14:04:17.361 [INFO][4310] cni-plugin/k8s.go 418: Populated endpoint ContainerID="273d726f464dc884ac9d51e1668bd05f5dedda2b419514ecb18a0b06c458854f" Namespace="kube-system" Pod="coredns-66bc5c9577-thzch" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--thzch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--thzch-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"f16f1057-636c-439c-9e25-76999ec5fe91", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 14, 3, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-thzch", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1813c569520", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 14:04:17.406128 containerd[1613]: 2025-10-31 14:04:17.362 [INFO][4310] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="273d726f464dc884ac9d51e1668bd05f5dedda2b419514ecb18a0b06c458854f" Namespace="kube-system" Pod="coredns-66bc5c9577-thzch" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--thzch-eth0" Oct 31 14:04:17.406128 containerd[1613]: 2025-10-31 14:04:17.362 [INFO][4310] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1813c569520 ContainerID="273d726f464dc884ac9d51e1668bd05f5dedda2b419514ecb18a0b06c458854f" Namespace="kube-system" Pod="coredns-66bc5c9577-thzch" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--thzch-eth0" Oct 31 
14:04:17.406128 containerd[1613]: 2025-10-31 14:04:17.383 [INFO][4310] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="273d726f464dc884ac9d51e1668bd05f5dedda2b419514ecb18a0b06c458854f" Namespace="kube-system" Pod="coredns-66bc5c9577-thzch" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--thzch-eth0" Oct 31 14:04:17.406128 containerd[1613]: 2025-10-31 14:04:17.384 [INFO][4310] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="273d726f464dc884ac9d51e1668bd05f5dedda2b419514ecb18a0b06c458854f" Namespace="kube-system" Pod="coredns-66bc5c9577-thzch" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--thzch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--thzch-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"f16f1057-636c-439c-9e25-76999ec5fe91", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 14, 3, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"273d726f464dc884ac9d51e1668bd05f5dedda2b419514ecb18a0b06c458854f", Pod:"coredns-66bc5c9577-thzch", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1813c569520", 
MAC:"62:b0:dd:05:61:44", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 14:04:17.406128 containerd[1613]: 2025-10-31 14:04:17.399 [INFO][4310] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="273d726f464dc884ac9d51e1668bd05f5dedda2b419514ecb18a0b06c458854f" Namespace="kube-system" Pod="coredns-66bc5c9577-thzch" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--thzch-eth0" Oct 31 14:04:17.409565 containerd[1613]: time="2025-10-31T14:04:17.409536322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7694dfd98b-9cd4s,Uid:831c9a9e-d727-4252-8c51-c27a6cbc929f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c78f8b055a20f34969cdc3c55426798c9bf50775346318ae24f5f76c0f3a9c87\"" Oct 31 14:04:17.412085 containerd[1613]: time="2025-10-31T14:04:17.412065672Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 14:04:17.421364 containerd[1613]: time="2025-10-31T14:04:17.421319274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rpkmr,Uid:d72fcf62-30d2-4a4d-9feb-16a72bc97e14,Namespace:calico-system,Attempt:0,} returns sandbox id 
\"2deaddb1c3d228e273affdb09199870431593a49c106392fc70ed947ebc99aa6\"" Oct 31 14:04:17.434707 containerd[1613]: time="2025-10-31T14:04:17.434586080Z" level=info msg="connecting to shim 273d726f464dc884ac9d51e1668bd05f5dedda2b419514ecb18a0b06c458854f" address="unix:///run/containerd/s/8dab05bf2fa2a86f3caaad80c22200e41872f213e79a09428d2a54295557d0b4" namespace=k8s.io protocol=ttrpc version=3 Oct 31 14:04:17.464032 systemd[1]: Started cri-containerd-273d726f464dc884ac9d51e1668bd05f5dedda2b419514ecb18a0b06c458854f.scope - libcontainer container 273d726f464dc884ac9d51e1668bd05f5dedda2b419514ecb18a0b06c458854f. Oct 31 14:04:17.480126 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 14:04:17.511814 containerd[1613]: time="2025-10-31T14:04:17.511762272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-thzch,Uid:f16f1057-636c-439c-9e25-76999ec5fe91,Namespace:kube-system,Attempt:0,} returns sandbox id \"273d726f464dc884ac9d51e1668bd05f5dedda2b419514ecb18a0b06c458854f\"" Oct 31 14:04:17.512620 kubelet[2763]: E1031 14:04:17.512596 2763 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 14:04:17.517103 containerd[1613]: time="2025-10-31T14:04:17.517041521Z" level=info msg="CreateContainer within sandbox \"273d726f464dc884ac9d51e1668bd05f5dedda2b419514ecb18a0b06c458854f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 31 14:04:17.551899 containerd[1613]: time="2025-10-31T14:04:17.550209055Z" level=info msg="Container b18395ce78572ff5d6ef15f46f31b01f97477d378423d2a5ee62474696406cee: CDI devices from CRI Config.CDIDevices: []" Oct 31 14:04:17.554657 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount606501504.mount: Deactivated successfully. 
Oct 31 14:04:17.594604 containerd[1613]: time="2025-10-31T14:04:17.594552240Z" level=info msg="CreateContainer within sandbox \"273d726f464dc884ac9d51e1668bd05f5dedda2b419514ecb18a0b06c458854f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b18395ce78572ff5d6ef15f46f31b01f97477d378423d2a5ee62474696406cee\"" Oct 31 14:04:17.595131 containerd[1613]: time="2025-10-31T14:04:17.595091406Z" level=info msg="StartContainer for \"b18395ce78572ff5d6ef15f46f31b01f97477d378423d2a5ee62474696406cee\"" Oct 31 14:04:17.595948 containerd[1613]: time="2025-10-31T14:04:17.595924269Z" level=info msg="connecting to shim b18395ce78572ff5d6ef15f46f31b01f97477d378423d2a5ee62474696406cee" address="unix:///run/containerd/s/8dab05bf2fa2a86f3caaad80c22200e41872f213e79a09428d2a54295557d0b4" protocol=ttrpc version=3 Oct 31 14:04:17.618145 systemd[1]: Started cri-containerd-b18395ce78572ff5d6ef15f46f31b01f97477d378423d2a5ee62474696406cee.scope - libcontainer container b18395ce78572ff5d6ef15f46f31b01f97477d378423d2a5ee62474696406cee. 
Oct 31 14:04:17.663019 containerd[1613]: time="2025-10-31T14:04:17.662932272Z" level=info msg="StartContainer for \"b18395ce78572ff5d6ef15f46f31b01f97477d378423d2a5ee62474696406cee\" returns successfully" Oct 31 14:04:17.754861 containerd[1613]: time="2025-10-31T14:04:17.754699502Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 14:04:17.755900 containerd[1613]: time="2025-10-31T14:04:17.755820430Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 14:04:17.755900 containerd[1613]: time="2025-10-31T14:04:17.755885781Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 14:04:17.756202 kubelet[2763]: E1031 14:04:17.756141 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 14:04:17.756274 kubelet[2763]: E1031 14:04:17.756207 2763 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 14:04:17.756407 kubelet[2763]: E1031 14:04:17.756378 2763 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod 
calico-apiserver-7694dfd98b-9cd4s_calico-apiserver(831c9a9e-d727-4252-8c51-c27a6cbc929f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 14:04:17.756465 kubelet[2763]: E1031 14:04:17.756422 2763 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7694dfd98b-9cd4s" podUID="831c9a9e-d727-4252-8c51-c27a6cbc929f" Oct 31 14:04:17.756609 containerd[1613]: time="2025-10-31T14:04:17.756582461Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 31 14:04:17.804029 systemd-networkd[1520]: cali09d055cfd59: Gained IPv6LL Oct 31 14:04:17.948531 containerd[1613]: time="2025-10-31T14:04:17.948474818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-bl4qr,Uid:234f93e5-cb04-4b52-a43f-b06df690a25b,Namespace:calico-system,Attempt:0,}" Oct 31 14:04:17.950306 containerd[1613]: time="2025-10-31T14:04:17.950277588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7694dfd98b-xgdgb,Uid:3ffaded5-6338-4036-9fc4-23fbc0d5fd0b,Namespace:calico-apiserver,Attempt:0,}" Oct 31 14:04:18.061912 systemd-networkd[1520]: cali5998c1d0bb8: Link UP Oct 31 14:04:18.062984 systemd-networkd[1520]: cali5998c1d0bb8: Gained carrier Oct 31 14:04:18.078273 containerd[1613]: 2025-10-31 14:04:17.986 [INFO][4596] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-goldmane--7c778bb748--bl4qr-eth0 goldmane-7c778bb748- calico-system 234f93e5-cb04-4b52-a43f-b06df690a25b 887 0 2025-10-31 14:03:54 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-bl4qr eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali5998c1d0bb8 [] [] }} ContainerID="aa006371884a0e4d171554a53d9bff6199bc08dc111e5e760aff4e67c652dc43" Namespace="calico-system" Pod="goldmane-7c778bb748-bl4qr" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--bl4qr-" Oct 31 14:04:18.078273 containerd[1613]: 2025-10-31 14:04:17.987 [INFO][4596] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aa006371884a0e4d171554a53d9bff6199bc08dc111e5e760aff4e67c652dc43" Namespace="calico-system" Pod="goldmane-7c778bb748-bl4qr" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--bl4qr-eth0" Oct 31 14:04:18.078273 containerd[1613]: 2025-10-31 14:04:18.027 [INFO][4626] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aa006371884a0e4d171554a53d9bff6199bc08dc111e5e760aff4e67c652dc43" HandleID="k8s-pod-network.aa006371884a0e4d171554a53d9bff6199bc08dc111e5e760aff4e67c652dc43" Workload="localhost-k8s-goldmane--7c778bb748--bl4qr-eth0" Oct 31 14:04:18.078273 containerd[1613]: 2025-10-31 14:04:18.027 [INFO][4626] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="aa006371884a0e4d171554a53d9bff6199bc08dc111e5e760aff4e67c652dc43" HandleID="k8s-pod-network.aa006371884a0e4d171554a53d9bff6199bc08dc111e5e760aff4e67c652dc43" Workload="localhost-k8s-goldmane--7c778bb748--bl4qr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c75c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-bl4qr", "timestamp":"2025-10-31 
14:04:18.027049111 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 14:04:18.078273 containerd[1613]: 2025-10-31 14:04:18.027 [INFO][4626] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 14:04:18.078273 containerd[1613]: 2025-10-31 14:04:18.027 [INFO][4626] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 31 14:04:18.078273 containerd[1613]: 2025-10-31 14:04:18.027 [INFO][4626] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 14:04:18.078273 containerd[1613]: 2025-10-31 14:04:18.033 [INFO][4626] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.aa006371884a0e4d171554a53d9bff6199bc08dc111e5e760aff4e67c652dc43" host="localhost" Oct 31 14:04:18.078273 containerd[1613]: 2025-10-31 14:04:18.037 [INFO][4626] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 14:04:18.078273 containerd[1613]: 2025-10-31 14:04:18.041 [INFO][4626] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 14:04:18.078273 containerd[1613]: 2025-10-31 14:04:18.043 [INFO][4626] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 14:04:18.078273 containerd[1613]: 2025-10-31 14:04:18.046 [INFO][4626] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 14:04:18.078273 containerd[1613]: 2025-10-31 14:04:18.046 [INFO][4626] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.aa006371884a0e4d171554a53d9bff6199bc08dc111e5e760aff4e67c652dc43" host="localhost" Oct 31 14:04:18.078273 containerd[1613]: 2025-10-31 14:04:18.047 [INFO][4626] ipam/ipam.go 1780: Creating new handle: 
k8s-pod-network.aa006371884a0e4d171554a53d9bff6199bc08dc111e5e760aff4e67c652dc43 Oct 31 14:04:18.078273 containerd[1613]: 2025-10-31 14:04:18.050 [INFO][4626] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.aa006371884a0e4d171554a53d9bff6199bc08dc111e5e760aff4e67c652dc43" host="localhost" Oct 31 14:04:18.078273 containerd[1613]: 2025-10-31 14:04:18.055 [INFO][4626] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.aa006371884a0e4d171554a53d9bff6199bc08dc111e5e760aff4e67c652dc43" host="localhost" Oct 31 14:04:18.078273 containerd[1613]: 2025-10-31 14:04:18.055 [INFO][4626] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.aa006371884a0e4d171554a53d9bff6199bc08dc111e5e760aff4e67c652dc43" host="localhost" Oct 31 14:04:18.078273 containerd[1613]: 2025-10-31 14:04:18.055 [INFO][4626] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 31 14:04:18.078273 containerd[1613]: 2025-10-31 14:04:18.055 [INFO][4626] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="aa006371884a0e4d171554a53d9bff6199bc08dc111e5e760aff4e67c652dc43" HandleID="k8s-pod-network.aa006371884a0e4d171554a53d9bff6199bc08dc111e5e760aff4e67c652dc43" Workload="localhost-k8s-goldmane--7c778bb748--bl4qr-eth0" Oct 31 14:04:18.078958 containerd[1613]: 2025-10-31 14:04:18.058 [INFO][4596] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aa006371884a0e4d171554a53d9bff6199bc08dc111e5e760aff4e67c652dc43" Namespace="calico-system" Pod="goldmane-7c778bb748-bl4qr" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--bl4qr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--bl4qr-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"234f93e5-cb04-4b52-a43f-b06df690a25b", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 14, 3, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-bl4qr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5998c1d0bb8", 
MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 14:04:18.078958 containerd[1613]: 2025-10-31 14:04:18.058 [INFO][4596] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="aa006371884a0e4d171554a53d9bff6199bc08dc111e5e760aff4e67c652dc43" Namespace="calico-system" Pod="goldmane-7c778bb748-bl4qr" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--bl4qr-eth0" Oct 31 14:04:18.078958 containerd[1613]: 2025-10-31 14:04:18.058 [INFO][4596] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5998c1d0bb8 ContainerID="aa006371884a0e4d171554a53d9bff6199bc08dc111e5e760aff4e67c652dc43" Namespace="calico-system" Pod="goldmane-7c778bb748-bl4qr" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--bl4qr-eth0" Oct 31 14:04:18.078958 containerd[1613]: 2025-10-31 14:04:18.062 [INFO][4596] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aa006371884a0e4d171554a53d9bff6199bc08dc111e5e760aff4e67c652dc43" Namespace="calico-system" Pod="goldmane-7c778bb748-bl4qr" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--bl4qr-eth0" Oct 31 14:04:18.078958 containerd[1613]: 2025-10-31 14:04:18.063 [INFO][4596] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="aa006371884a0e4d171554a53d9bff6199bc08dc111e5e760aff4e67c652dc43" Namespace="calico-system" Pod="goldmane-7c778bb748-bl4qr" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--bl4qr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--bl4qr-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"234f93e5-cb04-4b52-a43f-b06df690a25b", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 14, 3, 54, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aa006371884a0e4d171554a53d9bff6199bc08dc111e5e760aff4e67c652dc43", Pod:"goldmane-7c778bb748-bl4qr", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali5998c1d0bb8", MAC:"66:e5:ce:76:2d:15", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 14:04:18.078958 containerd[1613]: 2025-10-31 14:04:18.072 [INFO][4596] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="aa006371884a0e4d171554a53d9bff6199bc08dc111e5e760aff4e67c652dc43" Namespace="calico-system" Pod="goldmane-7c778bb748-bl4qr" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--bl4qr-eth0" Oct 31 14:04:18.092972 containerd[1613]: time="2025-10-31T14:04:18.092927189Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 14:04:18.094074 containerd[1613]: time="2025-10-31T14:04:18.093944837Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 31 14:04:18.094074 containerd[1613]: time="2025-10-31T14:04:18.094005257Z" level=info msg="stop 
pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 31 14:04:18.094277 kubelet[2763]: E1031 14:04:18.094236 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 14:04:18.094687 kubelet[2763]: E1031 14:04:18.094289 2763 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 14:04:18.094687 kubelet[2763]: E1031 14:04:18.094375 2763 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-rpkmr_calico-system(d72fcf62-30d2-4a4d-9feb-16a72bc97e14): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 31 14:04:18.095463 containerd[1613]: time="2025-10-31T14:04:18.095430116Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 31 14:04:18.108077 kubelet[2763]: E1031 14:04:18.107786 2763 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 14:04:18.110332 kubelet[2763]: E1031 14:04:18.110312 2763 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 
14:04:18.111098 kubelet[2763]: E1031 14:04:18.111056 2763 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7694dfd98b-9cd4s" podUID="831c9a9e-d727-4252-8c51-c27a6cbc929f" Oct 31 14:04:18.118250 containerd[1613]: time="2025-10-31T14:04:18.118177903Z" level=info msg="connecting to shim aa006371884a0e4d171554a53d9bff6199bc08dc111e5e760aff4e67c652dc43" address="unix:///run/containerd/s/8239006f57bbbe045ad1719bc1a2541ddc149086398a93a0ecd16d1c029d88dd" namespace=k8s.io protocol=ttrpc version=3 Oct 31 14:04:18.124270 systemd-networkd[1520]: vxlan.calico: Gained IPv6LL Oct 31 14:04:18.141142 kubelet[2763]: I1031 14:04:18.140465 2763 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-thzch" podStartSLOduration=35.140402207 podStartE2EDuration="35.140402207s" podCreationTimestamp="2025-10-31 14:03:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-31 14:04:18.121820001 +0000 UTC m=+41.274425499" watchObservedRunningTime="2025-10-31 14:04:18.140402207 +0000 UTC m=+41.293007675" Oct 31 14:04:18.159030 systemd[1]: Started cri-containerd-aa006371884a0e4d171554a53d9bff6199bc08dc111e5e760aff4e67c652dc43.scope - libcontainer container aa006371884a0e4d171554a53d9bff6199bc08dc111e5e760aff4e67c652dc43. 
Oct 31 14:04:18.175117 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 14:04:18.192839 systemd-networkd[1520]: cali4e4bff6ca2c: Link UP Oct 31 14:04:18.194082 systemd-networkd[1520]: cali4e4bff6ca2c: Gained carrier Oct 31 14:04:18.218425 containerd[1613]: 2025-10-31 14:04:17.989 [INFO][4607] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7694dfd98b--xgdgb-eth0 calico-apiserver-7694dfd98b- calico-apiserver 3ffaded5-6338-4036-9fc4-23fbc0d5fd0b 888 0 2025-10-31 14:03:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7694dfd98b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7694dfd98b-xgdgb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4e4bff6ca2c [] [] }} ContainerID="d3c55a2fa8eb657cd14723b8e5bd23b810ba69e520cdf5c899c27f5003638306" Namespace="calico-apiserver" Pod="calico-apiserver-7694dfd98b-xgdgb" WorkloadEndpoint="localhost-k8s-calico--apiserver--7694dfd98b--xgdgb-" Oct 31 14:04:18.218425 containerd[1613]: 2025-10-31 14:04:17.989 [INFO][4607] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d3c55a2fa8eb657cd14723b8e5bd23b810ba69e520cdf5c899c27f5003638306" Namespace="calico-apiserver" Pod="calico-apiserver-7694dfd98b-xgdgb" WorkloadEndpoint="localhost-k8s-calico--apiserver--7694dfd98b--xgdgb-eth0" Oct 31 14:04:18.218425 containerd[1613]: 2025-10-31 14:04:18.028 [INFO][4633] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d3c55a2fa8eb657cd14723b8e5bd23b810ba69e520cdf5c899c27f5003638306" HandleID="k8s-pod-network.d3c55a2fa8eb657cd14723b8e5bd23b810ba69e520cdf5c899c27f5003638306" 
Workload="localhost-k8s-calico--apiserver--7694dfd98b--xgdgb-eth0" Oct 31 14:04:18.218425 containerd[1613]: 2025-10-31 14:04:18.028 [INFO][4633] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d3c55a2fa8eb657cd14723b8e5bd23b810ba69e520cdf5c899c27f5003638306" HandleID="k8s-pod-network.d3c55a2fa8eb657cd14723b8e5bd23b810ba69e520cdf5c899c27f5003638306" Workload="localhost-k8s-calico--apiserver--7694dfd98b--xgdgb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f4b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7694dfd98b-xgdgb", "timestamp":"2025-10-31 14:04:18.028269524 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 14:04:18.218425 containerd[1613]: 2025-10-31 14:04:18.028 [INFO][4633] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 14:04:18.218425 containerd[1613]: 2025-10-31 14:04:18.055 [INFO][4633] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 14:04:18.218425 containerd[1613]: 2025-10-31 14:04:18.055 [INFO][4633] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 14:04:18.218425 containerd[1613]: 2025-10-31 14:04:18.134 [INFO][4633] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d3c55a2fa8eb657cd14723b8e5bd23b810ba69e520cdf5c899c27f5003638306" host="localhost" Oct 31 14:04:18.218425 containerd[1613]: 2025-10-31 14:04:18.149 [INFO][4633] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 14:04:18.218425 containerd[1613]: 2025-10-31 14:04:18.159 [INFO][4633] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 14:04:18.218425 containerd[1613]: 2025-10-31 14:04:18.166 [INFO][4633] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 14:04:18.218425 containerd[1613]: 2025-10-31 14:04:18.169 [INFO][4633] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 14:04:18.218425 containerd[1613]: 2025-10-31 14:04:18.169 [INFO][4633] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d3c55a2fa8eb657cd14723b8e5bd23b810ba69e520cdf5c899c27f5003638306" host="localhost" Oct 31 14:04:18.218425 containerd[1613]: 2025-10-31 14:04:18.172 [INFO][4633] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d3c55a2fa8eb657cd14723b8e5bd23b810ba69e520cdf5c899c27f5003638306 Oct 31 14:04:18.218425 containerd[1613]: 2025-10-31 14:04:18.177 [INFO][4633] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d3c55a2fa8eb657cd14723b8e5bd23b810ba69e520cdf5c899c27f5003638306" host="localhost" Oct 31 14:04:18.218425 containerd[1613]: 2025-10-31 14:04:18.185 [INFO][4633] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.d3c55a2fa8eb657cd14723b8e5bd23b810ba69e520cdf5c899c27f5003638306" host="localhost" Oct 31 14:04:18.218425 containerd[1613]: 2025-10-31 14:04:18.185 [INFO][4633] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.d3c55a2fa8eb657cd14723b8e5bd23b810ba69e520cdf5c899c27f5003638306" host="localhost" Oct 31 14:04:18.218425 containerd[1613]: 2025-10-31 14:04:18.185 [INFO][4633] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 14:04:18.218425 containerd[1613]: 2025-10-31 14:04:18.185 [INFO][4633] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="d3c55a2fa8eb657cd14723b8e5bd23b810ba69e520cdf5c899c27f5003638306" HandleID="k8s-pod-network.d3c55a2fa8eb657cd14723b8e5bd23b810ba69e520cdf5c899c27f5003638306" Workload="localhost-k8s-calico--apiserver--7694dfd98b--xgdgb-eth0" Oct 31 14:04:18.219022 containerd[1613]: 2025-10-31 14:04:18.188 [INFO][4607] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d3c55a2fa8eb657cd14723b8e5bd23b810ba69e520cdf5c899c27f5003638306" Namespace="calico-apiserver" Pod="calico-apiserver-7694dfd98b-xgdgb" WorkloadEndpoint="localhost-k8s-calico--apiserver--7694dfd98b--xgdgb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7694dfd98b--xgdgb-eth0", GenerateName:"calico-apiserver-7694dfd98b-", Namespace:"calico-apiserver", SelfLink:"", UID:"3ffaded5-6338-4036-9fc4-23fbc0d5fd0b", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 14, 3, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7694dfd98b", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7694dfd98b-xgdgb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4e4bff6ca2c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 14:04:18.219022 containerd[1613]: 2025-10-31 14:04:18.189 [INFO][4607] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="d3c55a2fa8eb657cd14723b8e5bd23b810ba69e520cdf5c899c27f5003638306" Namespace="calico-apiserver" Pod="calico-apiserver-7694dfd98b-xgdgb" WorkloadEndpoint="localhost-k8s-calico--apiserver--7694dfd98b--xgdgb-eth0" Oct 31 14:04:18.219022 containerd[1613]: 2025-10-31 14:04:18.189 [INFO][4607] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4e4bff6ca2c ContainerID="d3c55a2fa8eb657cd14723b8e5bd23b810ba69e520cdf5c899c27f5003638306" Namespace="calico-apiserver" Pod="calico-apiserver-7694dfd98b-xgdgb" WorkloadEndpoint="localhost-k8s-calico--apiserver--7694dfd98b--xgdgb-eth0" Oct 31 14:04:18.219022 containerd[1613]: 2025-10-31 14:04:18.194 [INFO][4607] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d3c55a2fa8eb657cd14723b8e5bd23b810ba69e520cdf5c899c27f5003638306" Namespace="calico-apiserver" Pod="calico-apiserver-7694dfd98b-xgdgb" WorkloadEndpoint="localhost-k8s-calico--apiserver--7694dfd98b--xgdgb-eth0" Oct 31 14:04:18.219022 containerd[1613]: 2025-10-31 14:04:18.195 [INFO][4607] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="d3c55a2fa8eb657cd14723b8e5bd23b810ba69e520cdf5c899c27f5003638306" Namespace="calico-apiserver" Pod="calico-apiserver-7694dfd98b-xgdgb" WorkloadEndpoint="localhost-k8s-calico--apiserver--7694dfd98b--xgdgb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7694dfd98b--xgdgb-eth0", GenerateName:"calico-apiserver-7694dfd98b-", Namespace:"calico-apiserver", SelfLink:"", UID:"3ffaded5-6338-4036-9fc4-23fbc0d5fd0b", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 14, 3, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7694dfd98b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d3c55a2fa8eb657cd14723b8e5bd23b810ba69e520cdf5c899c27f5003638306", Pod:"calico-apiserver-7694dfd98b-xgdgb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4e4bff6ca2c", MAC:"42:dd:49:54:25:03", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 14:04:18.219022 containerd[1613]: 2025-10-31 14:04:18.208 [INFO][4607] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="d3c55a2fa8eb657cd14723b8e5bd23b810ba69e520cdf5c899c27f5003638306" Namespace="calico-apiserver" Pod="calico-apiserver-7694dfd98b-xgdgb" WorkloadEndpoint="localhost-k8s-calico--apiserver--7694dfd98b--xgdgb-eth0" Oct 31 14:04:18.228367 containerd[1613]: time="2025-10-31T14:04:18.228333667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-bl4qr,Uid:234f93e5-cb04-4b52-a43f-b06df690a25b,Namespace:calico-system,Attempt:0,} returns sandbox id \"aa006371884a0e4d171554a53d9bff6199bc08dc111e5e760aff4e67c652dc43\"" Oct 31 14:04:18.243788 containerd[1613]: time="2025-10-31T14:04:18.243741820Z" level=info msg="connecting to shim d3c55a2fa8eb657cd14723b8e5bd23b810ba69e520cdf5c899c27f5003638306" address="unix:///run/containerd/s/d917f79df3c60f4832ac0d2a941fa43d68b21aa63f1958cf48f7f0655bf6b99c" namespace=k8s.io protocol=ttrpc version=3 Oct 31 14:04:18.278001 systemd[1]: Started cri-containerd-d3c55a2fa8eb657cd14723b8e5bd23b810ba69e520cdf5c899c27f5003638306.scope - libcontainer container d3c55a2fa8eb657cd14723b8e5bd23b810ba69e520cdf5c899c27f5003638306. 
Oct 31 14:04:18.293877 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 14:04:18.330467 containerd[1613]: time="2025-10-31T14:04:18.330262990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7694dfd98b-xgdgb,Uid:3ffaded5-6338-4036-9fc4-23fbc0d5fd0b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d3c55a2fa8eb657cd14723b8e5bd23b810ba69e520cdf5c899c27f5003638306\"" Oct 31 14:04:18.445221 containerd[1613]: time="2025-10-31T14:04:18.445162476Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 14:04:18.447465 containerd[1613]: time="2025-10-31T14:04:18.447365837Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 31 14:04:18.447465 containerd[1613]: time="2025-10-31T14:04:18.447442159Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 31 14:04:18.447816 kubelet[2763]: E1031 14:04:18.447725 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 14:04:18.447816 kubelet[2763]: E1031 14:04:18.447804 2763 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 14:04:18.448060 kubelet[2763]: E1031 14:04:18.448007 2763 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-rpkmr_calico-system(d72fcf62-30d2-4a4d-9feb-16a72bc97e14): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 31 14:04:18.448247 kubelet[2763]: E1031 14:04:18.448072 2763 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rpkmr" podUID="d72fcf62-30d2-4a4d-9feb-16a72bc97e14" Oct 31 14:04:18.448337 containerd[1613]: time="2025-10-31T14:04:18.448175440Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 31 14:04:18.795084 containerd[1613]: time="2025-10-31T14:04:18.794920557Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 14:04:18.796293 containerd[1613]: 
time="2025-10-31T14:04:18.796254656Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 31 14:04:18.796398 containerd[1613]: time="2025-10-31T14:04:18.796336349Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 31 14:04:18.796552 kubelet[2763]: E1031 14:04:18.796509 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 14:04:18.796640 kubelet[2763]: E1031 14:04:18.796568 2763 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 14:04:18.796965 kubelet[2763]: E1031 14:04:18.796835 2763 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-bl4qr_calico-system(234f93e5-cb04-4b52-a43f-b06df690a25b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 31 14:04:18.796965 kubelet[2763]: E1031 14:04:18.796916 2763 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-bl4qr" podUID="234f93e5-cb04-4b52-a43f-b06df690a25b" Oct 31 14:04:18.797090 containerd[1613]: time="2025-10-31T14:04:18.797055492Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 31 14:04:18.956080 systemd-networkd[1520]: cali1813c569520: Gained IPv6LL Oct 31 14:04:19.116442 kubelet[2763]: E1031 14:04:19.116196 2763 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 14:04:19.116869 kubelet[2763]: E1031 14:04:19.116741 2763 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-bl4qr" podUID="234f93e5-cb04-4b52-a43f-b06df690a25b" Oct 31 14:04:19.117598 kubelet[2763]: E1031 14:04:19.117009 2763 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 31 14:04:19.117598 kubelet[2763]: E1031 14:04:19.117443 2763 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc 
error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rpkmr" podUID="d72fcf62-30d2-4a4d-9feb-16a72bc97e14" Oct 31 14:04:19.119294 kubelet[2763]: E1031 14:04:19.117944 2763 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7694dfd98b-9cd4s" podUID="831c9a9e-d727-4252-8c51-c27a6cbc929f" Oct 31 14:04:19.144230 containerd[1613]: time="2025-10-31T14:04:19.144166153Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 14:04:19.145401 containerd[1613]: time="2025-10-31T14:04:19.145354717Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 31 
14:04:19.145519 containerd[1613]: time="2025-10-31T14:04:19.145375699Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 31 14:04:19.145706 kubelet[2763]: E1031 14:04:19.145643 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 14:04:19.145706 kubelet[2763]: E1031 14:04:19.145689 2763 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 31 14:04:19.145935 kubelet[2763]: E1031 14:04:19.145907 2763 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7694dfd98b-xgdgb_calico-apiserver(3ffaded5-6338-4036-9fc4-23fbc0d5fd0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 31 14:04:19.145935 kubelet[2763]: E1031 14:04:19.145945 2763 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-7694dfd98b-xgdgb" podUID="3ffaded5-6338-4036-9fc4-23fbc0d5fd0b" Oct 31 14:04:19.149111 systemd-networkd[1520]: calif1d980df14e: Gained IPv6LL Oct 31 14:04:19.276043 systemd-networkd[1520]: cali4c5da4af37b: Gained IPv6LL Oct 31 14:04:19.532135 systemd-networkd[1520]: cali5998c1d0bb8: Gained IPv6LL Oct 31 14:04:19.949178 containerd[1613]: time="2025-10-31T14:04:19.949117556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75d9bc8644-z9ssv,Uid:310217e9-5570-4d9f-976f-99e2e93d2643,Namespace:calico-system,Attempt:0,}" Oct 31 14:04:20.056183 systemd-networkd[1520]: cali2ea1f6f94c1: Link UP Oct 31 14:04:20.057593 systemd-networkd[1520]: cali2ea1f6f94c1: Gained carrier Oct 31 14:04:20.070547 containerd[1613]: 2025-10-31 14:04:19.988 [INFO][4758] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--75d9bc8644--z9ssv-eth0 calico-kube-controllers-75d9bc8644- calico-system 310217e9-5570-4d9f-976f-99e2e93d2643 884 0 2025-10-31 14:03:56 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:75d9bc8644 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-75d9bc8644-z9ssv eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2ea1f6f94c1 [] [] }} ContainerID="12fec3b52a70a4957fa5c67a9065aeac9e7788b1a6b509dc5db3815f9973ef74" Namespace="calico-system" Pod="calico-kube-controllers-75d9bc8644-z9ssv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75d9bc8644--z9ssv-" Oct 31 14:04:20.070547 containerd[1613]: 2025-10-31 14:04:19.988 [INFO][4758] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="12fec3b52a70a4957fa5c67a9065aeac9e7788b1a6b509dc5db3815f9973ef74" Namespace="calico-system" Pod="calico-kube-controllers-75d9bc8644-z9ssv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75d9bc8644--z9ssv-eth0" Oct 31 14:04:20.070547 containerd[1613]: 2025-10-31 14:04:20.017 [INFO][4772] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="12fec3b52a70a4957fa5c67a9065aeac9e7788b1a6b509dc5db3815f9973ef74" HandleID="k8s-pod-network.12fec3b52a70a4957fa5c67a9065aeac9e7788b1a6b509dc5db3815f9973ef74" Workload="localhost-k8s-calico--kube--controllers--75d9bc8644--z9ssv-eth0" Oct 31 14:04:20.070547 containerd[1613]: 2025-10-31 14:04:20.017 [INFO][4772] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="12fec3b52a70a4957fa5c67a9065aeac9e7788b1a6b509dc5db3815f9973ef74" HandleID="k8s-pod-network.12fec3b52a70a4957fa5c67a9065aeac9e7788b1a6b509dc5db3815f9973ef74" Workload="localhost-k8s-calico--kube--controllers--75d9bc8644--z9ssv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000135860), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-75d9bc8644-z9ssv", "timestamp":"2025-10-31 14:04:20.017404744 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 31 14:04:20.070547 containerd[1613]: 2025-10-31 14:04:20.017 [INFO][4772] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 31 14:04:20.070547 containerd[1613]: 2025-10-31 14:04:20.017 [INFO][4772] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 31 14:04:20.070547 containerd[1613]: 2025-10-31 14:04:20.017 [INFO][4772] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 31 14:04:20.070547 containerd[1613]: 2025-10-31 14:04:20.023 [INFO][4772] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.12fec3b52a70a4957fa5c67a9065aeac9e7788b1a6b509dc5db3815f9973ef74" host="localhost" Oct 31 14:04:20.070547 containerd[1613]: 2025-10-31 14:04:20.027 [INFO][4772] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 31 14:04:20.070547 containerd[1613]: 2025-10-31 14:04:20.031 [INFO][4772] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 31 14:04:20.070547 containerd[1613]: 2025-10-31 14:04:20.033 [INFO][4772] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 31 14:04:20.070547 containerd[1613]: 2025-10-31 14:04:20.035 [INFO][4772] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 31 14:04:20.070547 containerd[1613]: 2025-10-31 14:04:20.035 [INFO][4772] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.12fec3b52a70a4957fa5c67a9065aeac9e7788b1a6b509dc5db3815f9973ef74" host="localhost" Oct 31 14:04:20.070547 containerd[1613]: 2025-10-31 14:04:20.036 [INFO][4772] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.12fec3b52a70a4957fa5c67a9065aeac9e7788b1a6b509dc5db3815f9973ef74 Oct 31 14:04:20.070547 containerd[1613]: 2025-10-31 14:04:20.040 [INFO][4772] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.12fec3b52a70a4957fa5c67a9065aeac9e7788b1a6b509dc5db3815f9973ef74" host="localhost" Oct 31 14:04:20.070547 containerd[1613]: 2025-10-31 14:04:20.047 [INFO][4772] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.12fec3b52a70a4957fa5c67a9065aeac9e7788b1a6b509dc5db3815f9973ef74" host="localhost" Oct 31 14:04:20.070547 containerd[1613]: 2025-10-31 14:04:20.048 [INFO][4772] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.12fec3b52a70a4957fa5c67a9065aeac9e7788b1a6b509dc5db3815f9973ef74" host="localhost" Oct 31 14:04:20.070547 containerd[1613]: 2025-10-31 14:04:20.048 [INFO][4772] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 31 14:04:20.070547 containerd[1613]: 2025-10-31 14:04:20.048 [INFO][4772] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="12fec3b52a70a4957fa5c67a9065aeac9e7788b1a6b509dc5db3815f9973ef74" HandleID="k8s-pod-network.12fec3b52a70a4957fa5c67a9065aeac9e7788b1a6b509dc5db3815f9973ef74" Workload="localhost-k8s-calico--kube--controllers--75d9bc8644--z9ssv-eth0" Oct 31 14:04:20.071176 containerd[1613]: 2025-10-31 14:04:20.051 [INFO][4758] cni-plugin/k8s.go 418: Populated endpoint ContainerID="12fec3b52a70a4957fa5c67a9065aeac9e7788b1a6b509dc5db3815f9973ef74" Namespace="calico-system" Pod="calico-kube-controllers-75d9bc8644-z9ssv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75d9bc8644--z9ssv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--75d9bc8644--z9ssv-eth0", GenerateName:"calico-kube-controllers-75d9bc8644-", Namespace:"calico-system", SelfLink:"", UID:"310217e9-5570-4d9f-976f-99e2e93d2643", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 14, 3, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75d9bc8644", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-75d9bc8644-z9ssv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2ea1f6f94c1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 14:04:20.071176 containerd[1613]: 2025-10-31 14:04:20.051 [INFO][4758] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="12fec3b52a70a4957fa5c67a9065aeac9e7788b1a6b509dc5db3815f9973ef74" Namespace="calico-system" Pod="calico-kube-controllers-75d9bc8644-z9ssv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75d9bc8644--z9ssv-eth0" Oct 31 14:04:20.071176 containerd[1613]: 2025-10-31 14:04:20.051 [INFO][4758] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2ea1f6f94c1 ContainerID="12fec3b52a70a4957fa5c67a9065aeac9e7788b1a6b509dc5db3815f9973ef74" Namespace="calico-system" Pod="calico-kube-controllers-75d9bc8644-z9ssv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75d9bc8644--z9ssv-eth0" Oct 31 14:04:20.071176 containerd[1613]: 2025-10-31 14:04:20.057 [INFO][4758] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="12fec3b52a70a4957fa5c67a9065aeac9e7788b1a6b509dc5db3815f9973ef74" Namespace="calico-system" Pod="calico-kube-controllers-75d9bc8644-z9ssv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75d9bc8644--z9ssv-eth0" Oct 31 14:04:20.071176 containerd[1613]: 
2025-10-31 14:04:20.058 [INFO][4758] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="12fec3b52a70a4957fa5c67a9065aeac9e7788b1a6b509dc5db3815f9973ef74" Namespace="calico-system" Pod="calico-kube-controllers-75d9bc8644-z9ssv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75d9bc8644--z9ssv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--75d9bc8644--z9ssv-eth0", GenerateName:"calico-kube-controllers-75d9bc8644-", Namespace:"calico-system", SelfLink:"", UID:"310217e9-5570-4d9f-976f-99e2e93d2643", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.October, 31, 14, 3, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75d9bc8644", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"12fec3b52a70a4957fa5c67a9065aeac9e7788b1a6b509dc5db3815f9973ef74", Pod:"calico-kube-controllers-75d9bc8644-z9ssv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2ea1f6f94c1", MAC:"a2:13:2e:93:5e:87", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 31 14:04:20.071176 containerd[1613]: 
2025-10-31 14:04:20.066 [INFO][4758] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="12fec3b52a70a4957fa5c67a9065aeac9e7788b1a6b509dc5db3815f9973ef74" Namespace="calico-system" Pod="calico-kube-controllers-75d9bc8644-z9ssv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--75d9bc8644--z9ssv-eth0" Oct 31 14:04:20.095068 containerd[1613]: time="2025-10-31T14:04:20.095013947Z" level=info msg="connecting to shim 12fec3b52a70a4957fa5c67a9065aeac9e7788b1a6b509dc5db3815f9973ef74" address="unix:///run/containerd/s/be80a091ed256b070c13146f84aea12680a85f0f939488e53c6dfc9be8f032cd" namespace=k8s.io protocol=ttrpc version=3 Oct 31 14:04:20.108120 systemd-networkd[1520]: cali4e4bff6ca2c: Gained IPv6LL Oct 31 14:04:20.121420 kubelet[2763]: E1031 14:04:20.121232 2763 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7694dfd98b-xgdgb" podUID="3ffaded5-6338-4036-9fc4-23fbc0d5fd0b" Oct 31 14:04:20.121775 kubelet[2763]: E1031 14:04:20.121505 2763 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-bl4qr" podUID="234f93e5-cb04-4b52-a43f-b06df690a25b" Oct 31 
14:04:20.134423 systemd[1]: Started cri-containerd-12fec3b52a70a4957fa5c67a9065aeac9e7788b1a6b509dc5db3815f9973ef74.scope - libcontainer container 12fec3b52a70a4957fa5c67a9065aeac9e7788b1a6b509dc5db3815f9973ef74. Oct 31 14:04:20.157870 systemd-resolved[1310]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 31 14:04:20.196481 containerd[1613]: time="2025-10-31T14:04:20.196422451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75d9bc8644-z9ssv,Uid:310217e9-5570-4d9f-976f-99e2e93d2643,Namespace:calico-system,Attempt:0,} returns sandbox id \"12fec3b52a70a4957fa5c67a9065aeac9e7788b1a6b509dc5db3815f9973ef74\"" Oct 31 14:04:20.203712 containerd[1613]: time="2025-10-31T14:04:20.203514481Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 31 14:04:20.549893 containerd[1613]: time="2025-10-31T14:04:20.549607084Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 14:04:20.551186 containerd[1613]: time="2025-10-31T14:04:20.551042914Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 31 14:04:20.551248 containerd[1613]: time="2025-10-31T14:04:20.551120608Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 31 14:04:20.551799 kubelet[2763]: E1031 14:04:20.551530 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 14:04:20.551799 kubelet[2763]: E1031 14:04:20.551594 2763 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 31 14:04:20.551799 kubelet[2763]: E1031 14:04:20.551695 2763 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-75d9bc8644-z9ssv_calico-system(310217e9-5570-4d9f-976f-99e2e93d2643): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 31 14:04:20.551799 kubelet[2763]: E1031 14:04:20.551730 2763 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75d9bc8644-z9ssv" podUID="310217e9-5570-4d9f-976f-99e2e93d2643" Oct 31 14:04:21.124932 kubelet[2763]: E1031 14:04:21.124750 2763 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc 
error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75d9bc8644-z9ssv" podUID="310217e9-5570-4d9f-976f-99e2e93d2643" Oct 31 14:04:21.836260 systemd-networkd[1520]: cali2ea1f6f94c1: Gained IPv6LL Oct 31 14:04:22.127335 kubelet[2763]: E1031 14:04:22.127172 2763 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75d9bc8644-z9ssv" podUID="310217e9-5570-4d9f-976f-99e2e93d2643" Oct 31 14:04:22.392163 systemd[1]: Started sshd@8-10.0.0.39:22-10.0.0.1:56002.service - OpenSSH per-connection server daemon (10.0.0.1:56002). Oct 31 14:04:22.470520 sshd[4848]: Accepted publickey for core from 10.0.0.1 port 56002 ssh2: RSA SHA256:vB+C50gRw4XKWL1h6W1ZxwGTSFbLNajJZ+GX1JbmfgQ Oct 31 14:04:22.472243 sshd-session[4848]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 14:04:22.476764 systemd-logind[1600]: New session 9 of user core. Oct 31 14:04:22.485025 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 31 14:04:22.586796 sshd[4851]: Connection closed by 10.0.0.1 port 56002 Oct 31 14:04:22.587282 sshd-session[4848]: pam_unix(sshd:session): session closed for user core Oct 31 14:04:22.592181 systemd[1]: sshd@8-10.0.0.39:22-10.0.0.1:56002.service: Deactivated successfully. 
Oct 31 14:04:22.595205 systemd[1]: session-9.scope: Deactivated successfully. Oct 31 14:04:22.596494 systemd-logind[1600]: Session 9 logged out. Waiting for processes to exit. Oct 31 14:04:22.599762 systemd-logind[1600]: Removed session 9. Oct 31 14:04:27.607020 systemd[1]: Started sshd@9-10.0.0.39:22-10.0.0.1:49516.service - OpenSSH per-connection server daemon (10.0.0.1:49516). Oct 31 14:04:27.670783 sshd[4876]: Accepted publickey for core from 10.0.0.1 port 49516 ssh2: RSA SHA256:vB+C50gRw4XKWL1h6W1ZxwGTSFbLNajJZ+GX1JbmfgQ Oct 31 14:04:27.672961 sshd-session[4876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 14:04:27.677932 systemd-logind[1600]: New session 10 of user core. Oct 31 14:04:27.693156 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 31 14:04:28.072636 sshd[4879]: Connection closed by 10.0.0.1 port 49516 Oct 31 14:04:28.072968 sshd-session[4876]: pam_unix(sshd:session): session closed for user core Oct 31 14:04:28.079691 systemd[1]: sshd@9-10.0.0.39:22-10.0.0.1:49516.service: Deactivated successfully. Oct 31 14:04:28.082340 systemd[1]: session-10.scope: Deactivated successfully. Oct 31 14:04:28.083409 systemd-logind[1600]: Session 10 logged out. Waiting for processes to exit. Oct 31 14:04:28.085738 systemd-logind[1600]: Removed session 10. 
Oct 31 14:04:29.947082 containerd[1613]: time="2025-10-31T14:04:29.947035819Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 31 14:04:30.533759 containerd[1613]: time="2025-10-31T14:04:30.533699180Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 14:04:30.619084 containerd[1613]: time="2025-10-31T14:04:30.619021213Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 31 14:04:30.619084 containerd[1613]: time="2025-10-31T14:04:30.619068495Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 31 14:04:30.619365 kubelet[2763]: E1031 14:04:30.619309 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 14:04:30.619760 kubelet[2763]: E1031 14:04:30.619372 2763 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 31 14:04:30.619760 kubelet[2763]: E1031 14:04:30.619551 2763 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-rpkmr_calico-system(d72fcf62-30d2-4a4d-9feb-16a72bc97e14): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 31 14:04:30.620040 containerd[1613]: time="2025-10-31T14:04:30.619945209Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 31 14:04:31.008530 containerd[1613]: time="2025-10-31T14:04:31.008446864Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 14:04:31.037315 containerd[1613]: time="2025-10-31T14:04:31.037247470Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 31 14:04:31.037315 containerd[1613]: time="2025-10-31T14:04:31.037290314Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 31 14:04:31.037652 kubelet[2763]: E1031 14:04:31.037580 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 14:04:31.037809 kubelet[2763]: E1031 14:04:31.037652 2763 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 31 14:04:31.038307 kubelet[2763]: E1031 14:04:31.037963 2763 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod 
whisker-7dd95d9845-xmgkr_calico-system(cbbbd567-9df7-46cd-88ad-c52cb886a0d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 31 14:04:31.038366 containerd[1613]: time="2025-10-31T14:04:31.038080437Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 31 14:04:31.457905 containerd[1613]: time="2025-10-31T14:04:31.457820435Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 14:04:31.459087 containerd[1613]: time="2025-10-31T14:04:31.459044646Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 31 14:04:31.459153 containerd[1613]: time="2025-10-31T14:04:31.459103181Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 31 14:04:31.459361 kubelet[2763]: E1031 14:04:31.459308 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 14:04:31.459415 kubelet[2763]: E1031 14:04:31.459374 2763 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 31 14:04:31.459584 kubelet[2763]: E1031 14:04:31.459555 2763 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-rpkmr_calico-system(d72fcf62-30d2-4a4d-9feb-16a72bc97e14): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 31 14:04:31.459715 kubelet[2763]: E1031 14:04:31.459606 2763 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rpkmr" podUID="d72fcf62-30d2-4a4d-9feb-16a72bc97e14" Oct 31 14:04:31.459790 containerd[1613]: time="2025-10-31T14:04:31.459753941Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 31 14:04:31.792547 containerd[1613]: time="2025-10-31T14:04:31.792358310Z" level=info msg="fetch failed after status: 404 Not 
Found" host=ghcr.io Oct 31 14:04:31.793678 containerd[1613]: time="2025-10-31T14:04:31.793635003Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 31 14:04:31.793777 containerd[1613]: time="2025-10-31T14:04:31.793720429Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 31 14:04:31.793956 kubelet[2763]: E1031 14:04:31.793907 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 14:04:31.794296 kubelet[2763]: E1031 14:04:31.793967 2763 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 31 14:04:31.794296 kubelet[2763]: E1031 14:04:31.794065 2763 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-7dd95d9845-xmgkr_calico-system(cbbbd567-9df7-46cd-88ad-c52cb886a0d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 31 14:04:31.794296 kubelet[2763]: E1031 14:04:31.794106 2763 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7dd95d9845-xmgkr" podUID="cbbbd567-9df7-46cd-88ad-c52cb886a0d1" Oct 31 14:04:32.949840 containerd[1613]: time="2025-10-31T14:04:32.948046848Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 31 14:04:33.098672 systemd[1]: Started sshd@10-10.0.0.39:22-10.0.0.1:49532.service - OpenSSH per-connection server daemon (10.0.0.1:49532). Oct 31 14:04:33.151490 sshd[4895]: Accepted publickey for core from 10.0.0.1 port 49532 ssh2: RSA SHA256:vB+C50gRw4XKWL1h6W1ZxwGTSFbLNajJZ+GX1JbmfgQ Oct 31 14:04:33.152870 sshd-session[4895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 14:04:33.156969 systemd-logind[1600]: New session 11 of user core. Oct 31 14:04:33.167991 systemd[1]: Started session-11.scope - Session 11 of User core. 
Oct 31 14:04:33.309760 containerd[1613]: time="2025-10-31T14:04:33.309603307Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 31 14:04:33.363469 sshd[4898]: Connection closed by 10.0.0.1 port 49532 Oct 31 14:04:33.364068 sshd-session[4895]: pam_unix(sshd:session): session closed for user core Oct 31 14:04:33.372936 systemd[1]: sshd@10-10.0.0.39:22-10.0.0.1:49532.service: Deactivated successfully. Oct 31 14:04:33.375169 systemd[1]: session-11.scope: Deactivated successfully. Oct 31 14:04:33.376208 systemd-logind[1600]: Session 11 logged out. Waiting for processes to exit. Oct 31 14:04:33.379734 systemd[1]: Started sshd@11-10.0.0.39:22-10.0.0.1:49534.service - OpenSSH per-connection server daemon (10.0.0.1:49534). Oct 31 14:04:33.380647 systemd-logind[1600]: Removed session 11. Oct 31 14:04:33.390094 containerd[1613]: time="2025-10-31T14:04:33.389925278Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 31 14:04:33.390094 containerd[1613]: time="2025-10-31T14:04:33.389968312Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 31 14:04:33.390280 kubelet[2763]: E1031 14:04:33.390230 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 14:04:33.390711 kubelet[2763]: E1031 14:04:33.390294 2763 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 31 14:04:33.390711 kubelet[2763]: E1031 14:04:33.390379 2763 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-bl4qr_calico-system(234f93e5-cb04-4b52-a43f-b06df690a25b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 31 14:04:33.390711 kubelet[2763]: E1031 14:04:33.390427 2763 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-bl4qr" podUID="234f93e5-cb04-4b52-a43f-b06df690a25b" Oct 31 14:04:33.448628 sshd[4912]: Accepted publickey for core from 10.0.0.1 port 49534 ssh2: RSA SHA256:vB+C50gRw4XKWL1h6W1ZxwGTSFbLNajJZ+GX1JbmfgQ Oct 31 14:04:33.450760 sshd-session[4912]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 14:04:33.455753 systemd-logind[1600]: New session 12 of user core. Oct 31 14:04:33.464017 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 31 14:04:33.585316 sshd[4915]: Connection closed by 10.0.0.1 port 49534 Oct 31 14:04:33.585772 sshd-session[4912]: pam_unix(sshd:session): session closed for user core Oct 31 14:04:33.599255 systemd[1]: sshd@11-10.0.0.39:22-10.0.0.1:49534.service: Deactivated successfully. 
Oct 31 14:04:33.603070 systemd[1]: session-12.scope: Deactivated successfully. Oct 31 14:04:33.606929 systemd-logind[1600]: Session 12 logged out. Waiting for processes to exit. Oct 31 14:04:33.610672 systemd[1]: Started sshd@12-10.0.0.39:22-10.0.0.1:49544.service - OpenSSH per-connection server daemon (10.0.0.1:49544). Oct 31 14:04:33.612564 systemd-logind[1600]: Removed session 12. Oct 31 14:04:33.666226 sshd[4927]: Accepted publickey for core from 10.0.0.1 port 49544 ssh2: RSA SHA256:vB+C50gRw4XKWL1h6W1ZxwGTSFbLNajJZ+GX1JbmfgQ Oct 31 14:04:33.667702 sshd-session[4927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 31 14:04:33.672569 systemd-logind[1600]: New session 13 of user core. Oct 31 14:04:33.682003 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 31 14:04:33.753202 sshd[4930]: Connection closed by 10.0.0.1 port 49544 Oct 31 14:04:33.753576 sshd-session[4927]: pam_unix(sshd:session): session closed for user core Oct 31 14:04:33.758272 systemd[1]: sshd@12-10.0.0.39:22-10.0.0.1:49544.service: Deactivated successfully. Oct 31 14:04:33.760511 systemd[1]: session-13.scope: Deactivated successfully. Oct 31 14:04:33.761354 systemd-logind[1600]: Session 13 logged out. Waiting for processes to exit. Oct 31 14:04:33.762462 systemd-logind[1600]: Removed session 13. 
Oct 31 14:04:33.947003 containerd[1613]: time="2025-10-31T14:04:33.946679099Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Oct 31 14:04:34.289676 containerd[1613]: time="2025-10-31T14:04:34.289510987Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 31 14:04:34.290702 containerd[1613]: time="2025-10-31T14:04:34.290670002Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Oct 31 14:04:34.290777 containerd[1613]: time="2025-10-31T14:04:34.290739487Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Oct 31 14:04:34.290926 kubelet[2763]: E1031 14:04:34.290887 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 31 14:04:34.290984 kubelet[2763]: E1031 14:04:34.290937 2763 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 31 14:04:34.291049 kubelet[2763]: E1031 14:04:34.291022 2763 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7694dfd98b-9cd4s_calico-apiserver(831c9a9e-d727-4252-8c51-c27a6cbc929f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Oct 31 14:04:34.291115 kubelet[2763]: E1031 14:04:34.291061 2763 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7694dfd98b-9cd4s" podUID="831c9a9e-d727-4252-8c51-c27a6cbc929f"
Oct 31 14:04:34.947096 containerd[1613]: time="2025-10-31T14:04:34.946779443Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Oct 31 14:04:35.288994 containerd[1613]: time="2025-10-31T14:04:35.288830563Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 31 14:04:35.290044 containerd[1613]: time="2025-10-31T14:04:35.289999325Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Oct 31 14:04:35.290481 containerd[1613]: time="2025-10-31T14:04:35.290085683Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Oct 31 14:04:35.290516 kubelet[2763]: E1031 14:04:35.290222 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 31 14:04:35.290516 kubelet[2763]: E1031 14:04:35.290274 2763 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 31 14:04:35.290516 kubelet[2763]: E1031 14:04:35.290353 2763 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7694dfd98b-xgdgb_calico-apiserver(3ffaded5-6338-4036-9fc4-23fbc0d5fd0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Oct 31 14:04:35.290516 kubelet[2763]: E1031 14:04:35.290383 2763 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7694dfd98b-xgdgb" podUID="3ffaded5-6338-4036-9fc4-23fbc0d5fd0b"
Oct 31 14:04:35.946967 containerd[1613]: time="2025-10-31T14:04:35.946619321Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Oct 31 14:04:36.457443 containerd[1613]: time="2025-10-31T14:04:36.457362693Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 31 14:04:36.498578 containerd[1613]: time="2025-10-31T14:04:36.498467450Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Oct 31 14:04:36.498726 containerd[1613]: time="2025-10-31T14:04:36.498526495Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Oct 31 14:04:36.498826 kubelet[2763]: E1031 14:04:36.498779 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Oct 31 14:04:36.499216 kubelet[2763]: E1031 14:04:36.498827 2763 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Oct 31 14:04:36.499216 kubelet[2763]: E1031 14:04:36.498925 2763 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-75d9bc8644-z9ssv_calico-system(310217e9-5570-4d9f-976f-99e2e93d2643): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Oct 31 14:04:36.499216 kubelet[2763]: E1031 14:04:36.498959 2763 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75d9bc8644-z9ssv" podUID="310217e9-5570-4d9f-976f-99e2e93d2643"
Oct 31 14:04:38.771288 systemd[1]: Started sshd@13-10.0.0.39:22-10.0.0.1:36076.service - OpenSSH per-connection server daemon (10.0.0.1:36076).
Oct 31 14:04:38.831651 sshd[4957]: Accepted publickey for core from 10.0.0.1 port 36076 ssh2: RSA SHA256:vB+C50gRw4XKWL1h6W1ZxwGTSFbLNajJZ+GX1JbmfgQ
Oct 31 14:04:38.833728 sshd-session[4957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 14:04:38.838642 systemd-logind[1600]: New session 14 of user core.
Oct 31 14:04:38.852003 systemd[1]: Started session-14.scope - Session 14 of User core.
Oct 31 14:04:38.932332 sshd[4960]: Connection closed by 10.0.0.1 port 36076
Oct 31 14:04:38.932694 sshd-session[4957]: pam_unix(sshd:session): session closed for user core
Oct 31 14:04:38.937571 systemd[1]: sshd@13-10.0.0.39:22-10.0.0.1:36076.service: Deactivated successfully.
Oct 31 14:04:38.939744 systemd[1]: session-14.scope: Deactivated successfully.
Oct 31 14:04:38.940651 systemd-logind[1600]: Session 14 logged out. Waiting for processes to exit.
Oct 31 14:04:38.942221 systemd-logind[1600]: Removed session 14.
Oct 31 14:04:41.947568 kubelet[2763]: E1031 14:04:41.947518 2763 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7dd95d9845-xmgkr" podUID="cbbbd567-9df7-46cd-88ad-c52cb886a0d1"
Oct 31 14:04:43.946470 systemd[1]: Started sshd@14-10.0.0.39:22-10.0.0.1:36080.service - OpenSSH per-connection server daemon (10.0.0.1:36080).
Oct 31 14:04:43.948517 kubelet[2763]: E1031 14:04:43.947768 2763 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rpkmr" podUID="d72fcf62-30d2-4a4d-9feb-16a72bc97e14"
Oct 31 14:04:44.010056 sshd[4979]: Accepted publickey for core from 10.0.0.1 port 36080 ssh2: RSA SHA256:vB+C50gRw4XKWL1h6W1ZxwGTSFbLNajJZ+GX1JbmfgQ
Oct 31 14:04:44.012225 sshd-session[4979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 14:04:44.017189 systemd-logind[1600]: New session 15 of user core.
Oct 31 14:04:44.025035 systemd[1]: Started session-15.scope - Session 15 of User core.
Oct 31 14:04:44.105959 sshd[4982]: Connection closed by 10.0.0.1 port 36080
Oct 31 14:04:44.106490 sshd-session[4979]: pam_unix(sshd:session): session closed for user core
Oct 31 14:04:44.112296 systemd[1]: sshd@14-10.0.0.39:22-10.0.0.1:36080.service: Deactivated successfully.
Oct 31 14:04:44.115305 systemd[1]: session-15.scope: Deactivated successfully.
Oct 31 14:04:44.116729 systemd-logind[1600]: Session 15 logged out. Waiting for processes to exit.
Oct 31 14:04:44.118317 systemd-logind[1600]: Removed session 15.
Oct 31 14:04:44.946932 kubelet[2763]: E1031 14:04:44.946797 2763 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7694dfd98b-9cd4s" podUID="831c9a9e-d727-4252-8c51-c27a6cbc929f"
Oct 31 14:04:44.946932 kubelet[2763]: E1031 14:04:44.946805 2763 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-bl4qr" podUID="234f93e5-cb04-4b52-a43f-b06df690a25b"
Oct 31 14:04:45.091779 kubelet[2763]: E1031 14:04:45.091725 2763 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 14:04:45.223657 containerd[1613]: time="2025-10-31T14:04:45.223529270Z" level=info msg="TaskExit event in podsandbox handler container_id:\"590b7d74cc3730dddb81d9da4bc767e402622075d8a567981aa6ebd1dc5043bc\" id:\"05b33b427f034368488f1f5739eec12151725dedda865162b8c7f9d64021dc2b\" pid:5007 exited_at:{seconds:1761919485 nanos:223162725}"
Oct 31 14:04:45.227013 kubelet[2763]: E1031 14:04:45.226987 2763 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 14:04:45.300515 containerd[1613]: time="2025-10-31T14:04:45.300475349Z" level=info msg="TaskExit event in podsandbox handler container_id:\"590b7d74cc3730dddb81d9da4bc767e402622075d8a567981aa6ebd1dc5043bc\" id:\"f8f9b30154c75d7354849489e2c5396890aa7c4ce8662891cb944a58f05bdb9e\" pid:5033 exited_at:{seconds:1761919485 nanos:300201913}"
Oct 31 14:04:45.946920 kubelet[2763]: E1031 14:04:45.946595 2763 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7694dfd98b-xgdgb" podUID="3ffaded5-6338-4036-9fc4-23fbc0d5fd0b"
Oct 31 14:04:46.947140 kubelet[2763]: E1031 14:04:46.947084 2763 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75d9bc8644-z9ssv" podUID="310217e9-5570-4d9f-976f-99e2e93d2643"
Oct 31 14:04:49.128939 systemd[1]: Started sshd@15-10.0.0.39:22-10.0.0.1:60362.service - OpenSSH per-connection server daemon (10.0.0.1:60362).
Oct 31 14:04:49.196100 sshd[5049]: Accepted publickey for core from 10.0.0.1 port 60362 ssh2: RSA SHA256:vB+C50gRw4XKWL1h6W1ZxwGTSFbLNajJZ+GX1JbmfgQ
Oct 31 14:04:49.198236 sshd-session[5049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 14:04:49.203415 systemd-logind[1600]: New session 16 of user core.
Oct 31 14:04:49.213153 systemd[1]: Started session-16.scope - Session 16 of User core.
Oct 31 14:04:49.314046 sshd[5052]: Connection closed by 10.0.0.1 port 60362
Oct 31 14:04:49.314412 sshd-session[5049]: pam_unix(sshd:session): session closed for user core
Oct 31 14:04:49.320078 systemd[1]: sshd@15-10.0.0.39:22-10.0.0.1:60362.service: Deactivated successfully.
Oct 31 14:04:49.322610 systemd[1]: session-16.scope: Deactivated successfully.
Oct 31 14:04:49.323616 systemd-logind[1600]: Session 16 logged out. Waiting for processes to exit.
Oct 31 14:04:49.325138 systemd-logind[1600]: Removed session 16.
Oct 31 14:04:53.947023 containerd[1613]: time="2025-10-31T14:04:53.946947659Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Oct 31 14:04:54.322866 containerd[1613]: time="2025-10-31T14:04:54.322702605Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 31 14:04:54.324018 containerd[1613]: time="2025-10-31T14:04:54.323981581Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Oct 31 14:04:54.324108 containerd[1613]: time="2025-10-31T14:04:54.324060142Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Oct 31 14:04:54.324291 kubelet[2763]: E1031 14:04:54.324235 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Oct 31 14:04:54.324790 kubelet[2763]: E1031 14:04:54.324301 2763 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Oct 31 14:04:54.324790 kubelet[2763]: E1031 14:04:54.324418 2763 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-7dd95d9845-xmgkr_calico-system(cbbbd567-9df7-46cd-88ad-c52cb886a0d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Oct 31 14:04:54.325192 containerd[1613]: time="2025-10-31T14:04:54.325166198Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Oct 31 14:04:54.328572 systemd[1]: Started sshd@16-10.0.0.39:22-10.0.0.1:60364.service - OpenSSH per-connection server daemon (10.0.0.1:60364).
Oct 31 14:04:54.378100 sshd[5065]: Accepted publickey for core from 10.0.0.1 port 60364 ssh2: RSA SHA256:vB+C50gRw4XKWL1h6W1ZxwGTSFbLNajJZ+GX1JbmfgQ
Oct 31 14:04:54.380236 sshd-session[5065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 14:04:54.385449 systemd-logind[1600]: New session 17 of user core.
Oct 31 14:04:54.393006 systemd[1]: Started session-17.scope - Session 17 of User core.
Oct 31 14:04:54.480162 sshd[5068]: Connection closed by 10.0.0.1 port 60364
Oct 31 14:04:54.480554 sshd-session[5065]: pam_unix(sshd:session): session closed for user core
Oct 31 14:04:54.485149 systemd[1]: sshd@16-10.0.0.39:22-10.0.0.1:60364.service: Deactivated successfully.
Oct 31 14:04:54.487323 systemd[1]: session-17.scope: Deactivated successfully.
Oct 31 14:04:54.488182 systemd-logind[1600]: Session 17 logged out. Waiting for processes to exit.
Oct 31 14:04:54.489380 systemd-logind[1600]: Removed session 17.
Oct 31 14:04:54.680934 containerd[1613]: time="2025-10-31T14:04:54.680884303Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 31 14:04:54.682053 containerd[1613]: time="2025-10-31T14:04:54.682016119Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Oct 31 14:04:54.682119 containerd[1613]: time="2025-10-31T14:04:54.682074190Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Oct 31 14:04:54.682305 kubelet[2763]: E1031 14:04:54.682248 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Oct 31 14:04:54.682395 kubelet[2763]: E1031 14:04:54.682311 2763 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Oct 31 14:04:54.682438 kubelet[2763]: E1031 14:04:54.682391 2763 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-7dd95d9845-xmgkr_calico-system(cbbbd567-9df7-46cd-88ad-c52cb886a0d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Oct 31 14:04:54.682472 kubelet[2763]: E1031 14:04:54.682430 2763 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7dd95d9845-xmgkr" podUID="cbbbd567-9df7-46cd-88ad-c52cb886a0d1"
Oct 31 14:04:54.947972 containerd[1613]: time="2025-10-31T14:04:54.947820623Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Oct 31 14:04:55.279409 containerd[1613]: time="2025-10-31T14:04:55.279247801Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 31 14:04:55.280728 containerd[1613]: time="2025-10-31T14:04:55.280685910Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Oct 31 14:04:55.280790 containerd[1613]: time="2025-10-31T14:04:55.280752067Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Oct 31 14:04:55.281004 kubelet[2763]: E1031 14:04:55.280938 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Oct 31 14:04:55.281076 kubelet[2763]: E1031 14:04:55.281005 2763 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Oct 31 14:04:55.281131 kubelet[2763]: E1031 14:04:55.281104 2763 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-rpkmr_calico-system(d72fcf62-30d2-4a4d-9feb-16a72bc97e14): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Oct 31 14:04:55.282040 containerd[1613]: time="2025-10-31T14:04:55.282006484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Oct 31 14:04:55.762812 containerd[1613]: time="2025-10-31T14:04:55.762749558Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 31 14:04:55.764100 containerd[1613]: time="2025-10-31T14:04:55.764058481Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Oct 31 14:04:55.764190 containerd[1613]: time="2025-10-31T14:04:55.764097325Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Oct 31 14:04:55.764376 kubelet[2763]: E1031 14:04:55.764324 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Oct 31 14:04:55.764836 kubelet[2763]: E1031 14:04:55.764385 2763 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Oct 31 14:04:55.764836 kubelet[2763]: E1031 14:04:55.764484 2763 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-rpkmr_calico-system(d72fcf62-30d2-4a4d-9feb-16a72bc97e14): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Oct 31 14:04:55.764836 kubelet[2763]: E1031 14:04:55.764536 2763 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rpkmr" podUID="d72fcf62-30d2-4a4d-9feb-16a72bc97e14"
Oct 31 14:04:56.948187 containerd[1613]: time="2025-10-31T14:04:56.948090361Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Oct 31 14:04:57.262425 containerd[1613]: time="2025-10-31T14:04:57.262299623Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 31 14:04:57.263428 containerd[1613]: time="2025-10-31T14:04:57.263386818Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Oct 31 14:04:57.263428 containerd[1613]: time="2025-10-31T14:04:57.263418690Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Oct 31 14:04:57.263613 kubelet[2763]: E1031 14:04:57.263546 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Oct 31 14:04:57.263613 kubelet[2763]: E1031 14:04:57.263583 2763 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Oct 31 14:04:57.264035 kubelet[2763]: E1031 14:04:57.263656 2763 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-bl4qr_calico-system(234f93e5-cb04-4b52-a43f-b06df690a25b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Oct 31 14:04:57.264035 kubelet[2763]: E1031 14:04:57.263687 2763 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-bl4qr" podUID="234f93e5-cb04-4b52-a43f-b06df690a25b"
Oct 31 14:04:57.947369 containerd[1613]: time="2025-10-31T14:04:57.947293711Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Oct 31 14:04:58.313812 containerd[1613]: time="2025-10-31T14:04:58.313651513Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 31 14:04:58.314957 containerd[1613]: time="2025-10-31T14:04:58.314900286Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Oct 31 14:04:58.315006 containerd[1613]: time="2025-10-31T14:04:58.314975790Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Oct 31 14:04:58.315213 kubelet[2763]: E1031 14:04:58.315166 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 31 14:04:58.315487 kubelet[2763]: E1031 14:04:58.315225 2763 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 31 14:04:58.315487 kubelet[2763]: E1031 14:04:58.315463 2763 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7694dfd98b-9cd4s_calico-apiserver(831c9a9e-d727-4252-8c51-c27a6cbc929f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Oct 31 14:04:58.315538 kubelet[2763]: E1031 14:04:58.315513 2763 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7694dfd98b-9cd4s" podUID="831c9a9e-d727-4252-8c51-c27a6cbc929f"
Oct 31 14:04:58.315766 containerd[1613]: time="2025-10-31T14:04:58.315727274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Oct 31 14:04:58.685474 containerd[1613]: time="2025-10-31T14:04:58.685393762Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 31 14:04:58.713376 containerd[1613]: time="2025-10-31T14:04:58.713299854Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Oct 31 14:04:58.713544 containerd[1613]: time="2025-10-31T14:04:58.713333288Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Oct 31 14:04:58.713675 kubelet[2763]: E1031 14:04:58.713613 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 31 14:04:58.713727 kubelet[2763]: E1031 14:04:58.713683 2763 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 31 14:04:58.713817 kubelet[2763]: E1031 14:04:58.713789 2763 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-7694dfd98b-xgdgb_calico-apiserver(3ffaded5-6338-4036-9fc4-23fbc0d5fd0b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Oct 31 14:04:58.713958 kubelet[2763]: E1031 14:04:58.713835 2763 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7694dfd98b-xgdgb" podUID="3ffaded5-6338-4036-9fc4-23fbc0d5fd0b"
Oct 31 14:04:59.493110 systemd[1]: Started sshd@17-10.0.0.39:22-10.0.0.1:36340.service - OpenSSH per-connection server daemon (10.0.0.1:36340).
Oct 31 14:04:59.562785 sshd[5087]: Accepted publickey for core from 10.0.0.1 port 36340 ssh2: RSA SHA256:vB+C50gRw4XKWL1h6W1ZxwGTSFbLNajJZ+GX1JbmfgQ
Oct 31 14:04:59.564552 sshd-session[5087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 14:04:59.569221 systemd-logind[1600]: New session 18 of user core.
Oct 31 14:04:59.582027 systemd[1]: Started session-18.scope - Session 18 of User core.
Oct 31 14:04:59.668443 sshd[5090]: Connection closed by 10.0.0.1 port 36340
Oct 31 14:04:59.669149 sshd-session[5087]: pam_unix(sshd:session): session closed for user core
Oct 31 14:04:59.680908 systemd[1]: sshd@17-10.0.0.39:22-10.0.0.1:36340.service: Deactivated successfully.
Oct 31 14:04:59.683374 systemd[1]: session-18.scope: Deactivated successfully.
Oct 31 14:04:59.684536 systemd-logind[1600]: Session 18 logged out. Waiting for processes to exit.
Oct 31 14:04:59.688371 systemd[1]: Started sshd@18-10.0.0.39:22-10.0.0.1:36354.service - OpenSSH per-connection server daemon (10.0.0.1:36354).
Oct 31 14:04:59.689315 systemd-logind[1600]: Removed session 18.
Oct 31 14:04:59.748641 sshd[5104]: Accepted publickey for core from 10.0.0.1 port 36354 ssh2: RSA SHA256:vB+C50gRw4XKWL1h6W1ZxwGTSFbLNajJZ+GX1JbmfgQ
Oct 31 14:04:59.749989 sshd-session[5104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 14:04:59.754786 systemd-logind[1600]: New session 19 of user core.
Oct 31 14:04:59.762024 systemd[1]: Started session-19.scope - Session 19 of User core.
Oct 31 14:05:00.027945 sshd[5107]: Connection closed by 10.0.0.1 port 36354
Oct 31 14:05:00.029293 sshd-session[5104]: pam_unix(sshd:session): session closed for user core
Oct 31 14:05:00.038673 systemd[1]: sshd@18-10.0.0.39:22-10.0.0.1:36354.service: Deactivated successfully.
Oct 31 14:05:00.040896 systemd[1]: session-19.scope: Deactivated successfully.
Oct 31 14:05:00.041842 systemd-logind[1600]: Session 19 logged out. Waiting for processes to exit.
Oct 31 14:05:00.045344 systemd[1]: Started sshd@19-10.0.0.39:22-10.0.0.1:36360.service - OpenSSH per-connection server daemon (10.0.0.1:36360).
Oct 31 14:05:00.046568 systemd-logind[1600]: Removed session 19.
Oct 31 14:05:00.113793 sshd[5118]: Accepted publickey for core from 10.0.0.1 port 36360 ssh2: RSA SHA256:vB+C50gRw4XKWL1h6W1ZxwGTSFbLNajJZ+GX1JbmfgQ
Oct 31 14:05:00.115201 sshd-session[5118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 14:05:00.119894 systemd-logind[1600]: New session 20 of user core.
Oct 31 14:05:00.136219 systemd[1]: Started session-20.scope - Session 20 of User core.
Oct 31 14:05:00.601407 sshd[5123]: Connection closed by 10.0.0.1 port 36360
Oct 31 14:05:00.601977 sshd-session[5118]: pam_unix(sshd:session): session closed for user core
Oct 31 14:05:00.614211 systemd[1]: sshd@19-10.0.0.39:22-10.0.0.1:36360.service: Deactivated successfully.
Oct 31 14:05:00.616502 systemd[1]: session-20.scope: Deactivated successfully.
Oct 31 14:05:00.618859 systemd-logind[1600]: Session 20 logged out. Waiting for processes to exit.
Oct 31 14:05:00.624174 systemd[1]: Started sshd@20-10.0.0.39:22-10.0.0.1:36374.service - OpenSSH per-connection server daemon (10.0.0.1:36374).
Oct 31 14:05:00.625706 systemd-logind[1600]: Removed session 20.
Oct 31 14:05:00.680734 sshd[5143]: Accepted publickey for core from 10.0.0.1 port 36374 ssh2: RSA SHA256:vB+C50gRw4XKWL1h6W1ZxwGTSFbLNajJZ+GX1JbmfgQ
Oct 31 14:05:00.682096 sshd-session[5143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 14:05:00.687240 systemd-logind[1600]: New session 21 of user core.
Oct 31 14:05:00.698011 systemd[1]: Started session-21.scope - Session 21 of User core.
Oct 31 14:05:00.870412 sshd[5146]: Connection closed by 10.0.0.1 port 36374
Oct 31 14:05:00.871220 sshd-session[5143]: pam_unix(sshd:session): session closed for user core
Oct 31 14:05:00.884135 systemd[1]: sshd@20-10.0.0.39:22-10.0.0.1:36374.service: Deactivated successfully.
Oct 31 14:05:00.887032 systemd[1]: session-21.scope: Deactivated successfully.
Oct 31 14:05:00.887898 systemd-logind[1600]: Session 21 logged out. Waiting for processes to exit.
Oct 31 14:05:00.891343 systemd[1]: Started sshd@21-10.0.0.39:22-10.0.0.1:36390.service - OpenSSH per-connection server daemon (10.0.0.1:36390).
Oct 31 14:05:00.892026 systemd-logind[1600]: Removed session 21.
Oct 31 14:05:00.942085 sshd[5158]: Accepted publickey for core from 10.0.0.1 port 36390 ssh2: RSA SHA256:vB+C50gRw4XKWL1h6W1ZxwGTSFbLNajJZ+GX1JbmfgQ
Oct 31 14:05:00.943917 sshd-session[5158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 14:05:00.949646 containerd[1613]: time="2025-10-31T14:05:00.949276490Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Oct 31 14:05:00.951685 systemd-logind[1600]: New session 22 of user core.
Oct 31 14:05:00.956110 systemd[1]: Started session-22.scope - Session 22 of User core.
Oct 31 14:05:01.037324 sshd[5161]: Connection closed by 10.0.0.1 port 36390
Oct 31 14:05:01.037812 sshd-session[5158]: pam_unix(sshd:session): session closed for user core
Oct 31 14:05:01.043379 systemd[1]: sshd@21-10.0.0.39:22-10.0.0.1:36390.service: Deactivated successfully.
Oct 31 14:05:01.045701 systemd[1]: session-22.scope: Deactivated successfully.
Oct 31 14:05:01.046539 systemd-logind[1600]: Session 22 logged out. Waiting for processes to exit.
Oct 31 14:05:01.048503 systemd-logind[1600]: Removed session 22.
Oct 31 14:05:01.531945 containerd[1613]: time="2025-10-31T14:05:01.531881032Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 31 14:05:01.533130 containerd[1613]: time="2025-10-31T14:05:01.533090716Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Oct 31 14:05:01.533212 containerd[1613]: time="2025-10-31T14:05:01.533151672Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Oct 31 14:05:01.533888 kubelet[2763]: E1031 14:05:01.533335 2763 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Oct 31 14:05:01.533888 kubelet[2763]: E1031 14:05:01.533391 2763 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Oct 31 14:05:01.533888 kubelet[2763]: E1031 14:05:01.533469 2763 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-75d9bc8644-z9ssv_calico-system(310217e9-5570-4d9f-976f-99e2e93d2643): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Oct 31 14:05:01.533888 kubelet[2763]: E1031 14:05:01.533501 2763 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75d9bc8644-z9ssv" podUID="310217e9-5570-4d9f-976f-99e2e93d2643"
Oct 31 14:05:02.946015 kubelet[2763]: E1031 14:05:02.945925 2763 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 14:05:06.054261 systemd[1]: Started sshd@22-10.0.0.39:22-10.0.0.1:60550.service - OpenSSH per-connection server daemon (10.0.0.1:60550).
Oct 31 14:05:06.114836 sshd[5179]: Accepted publickey for core from 10.0.0.1 port 60550 ssh2: RSA SHA256:vB+C50gRw4XKWL1h6W1ZxwGTSFbLNajJZ+GX1JbmfgQ
Oct 31 14:05:06.116679 sshd-session[5179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 14:05:06.121954 systemd-logind[1600]: New session 23 of user core.
Oct 31 14:05:06.132067 systemd[1]: Started session-23.scope - Session 23 of User core.
Oct 31 14:05:06.212534 sshd[5182]: Connection closed by 10.0.0.1 port 60550
Oct 31 14:05:06.212887 sshd-session[5179]: pam_unix(sshd:session): session closed for user core
Oct 31 14:05:06.218441 systemd[1]: sshd@22-10.0.0.39:22-10.0.0.1:60550.service: Deactivated successfully.
Oct 31 14:05:06.220871 systemd[1]: session-23.scope: Deactivated successfully.
Oct 31 14:05:06.221808 systemd-logind[1600]: Session 23 logged out. Waiting for processes to exit.
Oct 31 14:05:06.223675 systemd-logind[1600]: Removed session 23.
Oct 31 14:05:08.946191 kubelet[2763]: E1031 14:05:08.945931 2763 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 14:05:08.946191 kubelet[2763]: E1031 14:05:08.945975 2763 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 14:05:08.948501 kubelet[2763]: E1031 14:05:08.948440 2763 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7dd95d9845-xmgkr" podUID="cbbbd567-9df7-46cd-88ad-c52cb886a0d1"
Oct 31 14:05:09.945670 kubelet[2763]: E1031 14:05:09.945619 2763 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 14:05:09.945926 kubelet[2763]: E1031 14:05:09.945616 2763 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 31 14:05:09.946920 kubelet[2763]: E1031 14:05:09.946843 2763 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rpkmr" podUID="d72fcf62-30d2-4a4d-9feb-16a72bc97e14"
Oct 31 14:05:10.947285 kubelet[2763]: E1031 14:05:10.947200 2763 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7694dfd98b-xgdgb" podUID="3ffaded5-6338-4036-9fc4-23fbc0d5fd0b"
Oct 31 14:05:11.228635 systemd[1]: Started sshd@23-10.0.0.39:22-10.0.0.1:60554.service - OpenSSH per-connection server daemon (10.0.0.1:60554).
Oct 31 14:05:11.282805 sshd[5196]: Accepted publickey for core from 10.0.0.1 port 60554 ssh2: RSA SHA256:vB+C50gRw4XKWL1h6W1ZxwGTSFbLNajJZ+GX1JbmfgQ
Oct 31 14:05:11.284434 sshd-session[5196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 14:05:11.290235 systemd-logind[1600]: New session 24 of user core.
Oct 31 14:05:11.304992 systemd[1]: Started session-24.scope - Session 24 of User core.
Oct 31 14:05:11.373345 sshd[5199]: Connection closed by 10.0.0.1 port 60554
Oct 31 14:05:11.373653 sshd-session[5196]: pam_unix(sshd:session): session closed for user core
Oct 31 14:05:11.377718 systemd[1]: sshd@23-10.0.0.39:22-10.0.0.1:60554.service: Deactivated successfully.
Oct 31 14:05:11.379748 systemd[1]: session-24.scope: Deactivated successfully.
Oct 31 14:05:11.380458 systemd-logind[1600]: Session 24 logged out. Waiting for processes to exit.
Oct 31 14:05:11.381704 systemd-logind[1600]: Removed session 24.
Oct 31 14:05:12.947418 kubelet[2763]: E1031 14:05:12.947116 2763 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7694dfd98b-9cd4s" podUID="831c9a9e-d727-4252-8c51-c27a6cbc929f"
Oct 31 14:05:12.948903 kubelet[2763]: E1031 14:05:12.948833 2763 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-bl4qr" podUID="234f93e5-cb04-4b52-a43f-b06df690a25b"
Oct 31 14:05:14.956316 kubelet[2763]: E1031 14:05:14.956186 2763 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-75d9bc8644-z9ssv" podUID="310217e9-5570-4d9f-976f-99e2e93d2643"
Oct 31 14:05:15.373807 containerd[1613]: time="2025-10-31T14:05:15.373703945Z" level=info msg="TaskExit event in podsandbox handler container_id:\"590b7d74cc3730dddb81d9da4bc767e402622075d8a567981aa6ebd1dc5043bc\" id:\"94a10a0acb1ca296676fdc051a1dd41c6e6025f78fc4aedbb7b7676a5a6999dd\" pid:5225 exited_at:{seconds:1761919515 nanos:372912110}"
Oct 31 14:05:16.389976 systemd[1]: Started sshd@24-10.0.0.39:22-10.0.0.1:45674.service - OpenSSH per-connection server daemon (10.0.0.1:45674).
Oct 31 14:05:16.496044 sshd[5238]: Accepted publickey for core from 10.0.0.1 port 45674 ssh2: RSA SHA256:vB+C50gRw4XKWL1h6W1ZxwGTSFbLNajJZ+GX1JbmfgQ
Oct 31 14:05:16.498761 sshd-session[5238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 31 14:05:16.508235 systemd-logind[1600]: New session 25 of user core.
Oct 31 14:05:16.516210 systemd[1]: Started session-25.scope - Session 25 of User core.
Oct 31 14:05:16.672035 sshd[5241]: Connection closed by 10.0.0.1 port 45674
Oct 31 14:05:16.673119 sshd-session[5238]: pam_unix(sshd:session): session closed for user core
Oct 31 14:05:16.679783 systemd[1]: sshd@24-10.0.0.39:22-10.0.0.1:45674.service: Deactivated successfully.
Oct 31 14:05:16.683518 systemd[1]: session-25.scope: Deactivated successfully.
Oct 31 14:05:16.684779 systemd-logind[1600]: Session 25 logged out. Waiting for processes to exit.
Oct 31 14:05:16.687473 systemd-logind[1600]: Removed session 25.