Oct 30 00:02:20.548054 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Wed Oct 29 22:08:54 -00 2025
Oct 30 00:02:20.548099 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=56cc5d11e9ee9e328725323e5b298567de51aff19ad0756381062170c9c03796
Oct 30 00:02:20.548118 kernel: BIOS-provided physical RAM map:
Oct 30 00:02:20.548125 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Oct 30 00:02:20.548132 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Oct 30 00:02:20.548139 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Oct 30 00:02:20.548147 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Oct 30 00:02:20.548155 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Oct 30 00:02:20.548165 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Oct 30 00:02:20.548172 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Oct 30 00:02:20.548192 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Oct 30 00:02:20.548199 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Oct 30 00:02:20.548207 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Oct 30 00:02:20.548215 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Oct 30 00:02:20.548223 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Oct 30 00:02:20.548234 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Oct 30 00:02:20.548245 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Oct 30 00:02:20.548253 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Oct 30 00:02:20.548260 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Oct 30 00:02:20.548268 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Oct 30 00:02:20.548275 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Oct 30 00:02:20.548283 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Oct 30 00:02:20.548290 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Oct 30 00:02:20.548298 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 30 00:02:20.548306 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Oct 30 00:02:20.548315 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Oct 30 00:02:20.548323 kernel: NX (Execute Disable) protection: active
Oct 30 00:02:20.548331 kernel: APIC: Static calls initialized
Oct 30 00:02:20.548338 kernel: e820: update [mem 0x9b319018-0x9b322c57] usable ==> usable
Oct 30 00:02:20.548346 kernel: e820: update [mem 0x9b2dc018-0x9b318e57] usable ==> usable
Oct 30 00:02:20.548354 kernel: extended physical RAM map:
Oct 30 00:02:20.548361 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Oct 30 00:02:20.548369 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Oct 30 00:02:20.548377 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Oct 30 00:02:20.548384 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Oct 30 00:02:20.548392 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Oct 30 00:02:20.548402 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Oct 30 00:02:20.548410 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Oct 30 00:02:20.548417 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2dc017] usable
Oct 30 00:02:20.548425 kernel: reserve setup_data: [mem 0x000000009b2dc018-0x000000009b318e57] usable
Oct 30 00:02:20.548436 kernel: reserve setup_data: [mem 0x000000009b318e58-0x000000009b319017] usable
Oct 30 00:02:20.548446 kernel: reserve setup_data: [mem 0x000000009b319018-0x000000009b322c57] usable
Oct 30 00:02:20.548454 kernel: reserve setup_data: [mem 0x000000009b322c58-0x000000009bd3efff] usable
Oct 30 00:02:20.548462 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Oct 30 00:02:20.548470 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Oct 30 00:02:20.548478 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Oct 30 00:02:20.548486 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Oct 30 00:02:20.548494 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Oct 30 00:02:20.548501 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Oct 30 00:02:20.548511 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Oct 30 00:02:20.548519 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Oct 30 00:02:20.548527 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Oct 30 00:02:20.548535 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Oct 30 00:02:20.548543 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Oct 30 00:02:20.548550 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Oct 30 00:02:20.548558 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 30 00:02:20.548566 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Oct 30 00:02:20.548573 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Oct 30 00:02:20.548584 kernel: efi: EFI v2.7 by EDK II
Oct 30 00:02:20.548592 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Oct 30 00:02:20.548618 kernel: random: crng init done
Oct 30 00:02:20.548630 kernel: efi: Remove mem150: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Oct 30 00:02:20.548639 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Oct 30 00:02:20.548650 kernel: secureboot: Secure boot disabled
Oct 30 00:02:20.548659 kernel: SMBIOS 2.8 present.
Oct 30 00:02:20.548669 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Oct 30 00:02:20.548677 kernel: DMI: Memory slots populated: 1/1
Oct 30 00:02:20.548685 kernel: Hypervisor detected: KVM
Oct 30 00:02:20.548693 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Oct 30 00:02:20.548701 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 30 00:02:20.548709 kernel: kvm-clock: using sched offset of 5314751161 cycles
Oct 30 00:02:20.548729 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 30 00:02:20.548743 kernel: tsc: Detected 2794.748 MHz processor
Oct 30 00:02:20.548754 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 30 00:02:20.548765 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 30 00:02:20.548776 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Oct 30 00:02:20.548788 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Oct 30 00:02:20.548797 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 30 00:02:20.548805 kernel: Using GB pages for direct mapping
Oct 30 00:02:20.548818 kernel: ACPI: Early table checksum verification disabled
Oct 30 00:02:20.548827 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Oct 30 00:02:20.548835 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Oct 30 00:02:20.548844 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 30 00:02:20.548855 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 30 00:02:20.548867 kernel: ACPI: FACS 0x000000009CBDD000 000040
Oct 30 00:02:20.548878 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 30 00:02:20.548893 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 30 00:02:20.548902 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 30 00:02:20.548911 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 30 00:02:20.548919 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Oct 30 00:02:20.548928 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Oct 30 00:02:20.548936 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Oct 30 00:02:20.548946 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Oct 30 00:02:20.548967 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Oct 30 00:02:20.548979 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Oct 30 00:02:20.548989 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Oct 30 00:02:20.548998 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Oct 30 00:02:20.549009 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Oct 30 00:02:20.549020 kernel: No NUMA configuration found
Oct 30 00:02:20.549032 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Oct 30 00:02:20.549047 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Oct 30 00:02:20.549059 kernel: Zone ranges:
Oct 30 00:02:20.549070 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Oct 30 00:02:20.549081 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Oct 30 00:02:20.549090 kernel: Normal empty
Oct 30 00:02:20.549098 kernel: Device empty
Oct 30 00:02:20.549107 kernel: Movable zone start for each node
Oct 30 00:02:20.549115 kernel: Early memory node ranges
Oct 30 00:02:20.549126 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Oct 30 00:02:20.549138 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Oct 30 00:02:20.549147 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Oct 30 00:02:20.549156 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Oct 30 00:02:20.549165 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Oct 30 00:02:20.549174 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Oct 30 00:02:20.549194 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Oct 30 00:02:20.549202 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Oct 30 00:02:20.549216 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Oct 30 00:02:20.549224 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 30 00:02:20.549241 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Oct 30 00:02:20.549254 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Oct 30 00:02:20.549266 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 30 00:02:20.549277 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Oct 30 00:02:20.549289 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Oct 30 00:02:20.549301 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Oct 30 00:02:20.549311 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Oct 30 00:02:20.549323 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Oct 30 00:02:20.549332 kernel: ACPI: PM-Timer IO Port: 0x608
Oct 30 00:02:20.549341 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 30 00:02:20.549350 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 30 00:02:20.549361 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 30 00:02:20.549369 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 30 00:02:20.549378 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 30 00:02:20.549387 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 30 00:02:20.549395 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 30 00:02:20.549404 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 30 00:02:20.549413 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Oct 30 00:02:20.549424 kernel: TSC deadline timer available
Oct 30 00:02:20.549433 kernel: CPU topo: Max. logical packages: 1
Oct 30 00:02:20.549441 kernel: CPU topo: Max. logical dies: 1
Oct 30 00:02:20.549450 kernel: CPU topo: Max. dies per package: 1
Oct 30 00:02:20.549458 kernel: CPU topo: Max. threads per core: 1
Oct 30 00:02:20.549467 kernel: CPU topo: Num. cores per package: 4
Oct 30 00:02:20.549475 kernel: CPU topo: Num. threads per package: 4
Oct 30 00:02:20.549486 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Oct 30 00:02:20.549495 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 30 00:02:20.549504 kernel: kvm-guest: KVM setup pv remote TLB flush
Oct 30 00:02:20.549512 kernel: kvm-guest: setup PV sched yield
Oct 30 00:02:20.549521 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Oct 30 00:02:20.549530 kernel: Booting paravirtualized kernel on KVM
Oct 30 00:02:20.549540 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 30 00:02:20.549552 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Oct 30 00:02:20.549562 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Oct 30 00:02:20.549570 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Oct 30 00:02:20.549579 kernel: pcpu-alloc: [0] 0 1 2 3
Oct 30 00:02:20.549591 kernel: kvm-guest: PV spinlocks enabled
Oct 30 00:02:20.549647 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Oct 30 00:02:20.549660 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=56cc5d11e9ee9e328725323e5b298567de51aff19ad0756381062170c9c03796
Oct 30 00:02:20.549674 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 30 00:02:20.549683 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 30 00:02:20.549692 kernel: Fallback order for Node 0: 0
Oct 30 00:02:20.549700 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Oct 30 00:02:20.549709 kernel: Policy zone: DMA32
Oct 30 00:02:20.549717 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 30 00:02:20.549726 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct 30 00:02:20.549737 kernel: ftrace: allocating 40092 entries in 157 pages
Oct 30 00:02:20.549746 kernel: ftrace: allocated 157 pages with 5 groups
Oct 30 00:02:20.549754 kernel: Dynamic Preempt: voluntary
Oct 30 00:02:20.549763 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 30 00:02:20.549778 kernel: rcu: RCU event tracing is enabled.
Oct 30 00:02:20.549787 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Oct 30 00:02:20.549796 kernel: Trampoline variant of Tasks RCU enabled.
Oct 30 00:02:20.549808 kernel: Rude variant of Tasks RCU enabled.
Oct 30 00:02:20.549816 kernel: Tracing variant of Tasks RCU enabled.
Oct 30 00:02:20.549825 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 30 00:02:20.549892 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct 30 00:02:20.549912 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 30 00:02:20.549928 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 30 00:02:20.549944 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 30 00:02:20.549962 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Oct 30 00:02:20.549974 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 30 00:02:20.549984 kernel: Console: colour dummy device 80x25
Oct 30 00:02:20.549993 kernel: printk: legacy console [ttyS0] enabled
Oct 30 00:02:20.550001 kernel: ACPI: Core revision 20240827
Oct 30 00:02:20.550010 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Oct 30 00:02:20.550021 kernel: APIC: Switch to symmetric I/O mode setup
Oct 30 00:02:20.550031 kernel: x2apic enabled
Oct 30 00:02:20.550052 kernel: APIC: Switched APIC routing to: physical x2apic
Oct 30 00:02:20.550064 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Oct 30 00:02:20.550076 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Oct 30 00:02:20.550087 kernel: kvm-guest: setup PV IPIs
Oct 30 00:02:20.550098 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Oct 30 00:02:20.550109 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Oct 30 00:02:20.550120 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Oct 30 00:02:20.550138 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 30 00:02:20.550150 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 30 00:02:20.550162 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 30 00:02:20.550172 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 30 00:02:20.550192 kernel: Spectre V2 : Mitigation: Retpolines
Oct 30 00:02:20.550201 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct 30 00:02:20.550210 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct 30 00:02:20.550222 kernel: active return thunk: retbleed_return_thunk
Oct 30 00:02:20.550230 kernel: RETBleed: Mitigation: untrained return thunk
Oct 30 00:02:20.550243 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 30 00:02:20.550252 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 30 00:02:20.550260 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct 30 00:02:20.550270 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct 30 00:02:20.550281 kernel: active return thunk: srso_return_thunk
Oct 30 00:02:20.550290 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct 30 00:02:20.550298 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 30 00:02:20.550307 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 30 00:02:20.550316 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 30 00:02:20.550324 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 30 00:02:20.550333 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct 30 00:02:20.550343 kernel: Freeing SMP alternatives memory: 32K
Oct 30 00:02:20.550352 kernel: pid_max: default: 32768 minimum: 301
Oct 30 00:02:20.550361 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Oct 30 00:02:20.550369 kernel: landlock: Up and running.
Oct 30 00:02:20.550378 kernel: SELinux: Initializing.
Oct 30 00:02:20.550386 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 30 00:02:20.550395 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 30 00:02:20.550405 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct 30 00:02:20.550417 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 30 00:02:20.550427 kernel: ... version:                0
Oct 30 00:02:20.550438 kernel: ... bit width:              48
Oct 30 00:02:20.550449 kernel: ... generic registers:      6
Oct 30 00:02:20.550461 kernel: ... value mask:             0000ffffffffffff
Oct 30 00:02:20.550473 kernel: ... max period:             00007fffffffffff
Oct 30 00:02:20.550484 kernel: ... fixed-purpose events:   0
Oct 30 00:02:20.550495 kernel: ... event mask:             000000000000003f
Oct 30 00:02:20.550503 kernel: signal: max sigframe size: 1776
Oct 30 00:02:20.550512 kernel: rcu: Hierarchical SRCU implementation.
Oct 30 00:02:20.550521 kernel: rcu: Max phase no-delay instances is 400.
Oct 30 00:02:20.550535 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Oct 30 00:02:20.550545 kernel: smp: Bringing up secondary CPUs ...
Oct 30 00:02:20.550555 kernel: smpboot: x86: Booting SMP configuration:
Oct 30 00:02:20.550566 kernel: .... node #0, CPUs: #1 #2 #3
Oct 30 00:02:20.550575 kernel: smp: Brought up 1 node, 4 CPUs
Oct 30 00:02:20.550583 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Oct 30 00:02:20.550592 kernel: Memory: 2445196K/2565800K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15956K init, 2088K bss, 114668K reserved, 0K cma-reserved)
Oct 30 00:02:20.550617 kernel: devtmpfs: initialized
Oct 30 00:02:20.550629 kernel: x86/mm: Memory block size: 128MB
Oct 30 00:02:20.550641 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Oct 30 00:02:20.550657 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Oct 30 00:02:20.550667 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Oct 30 00:02:20.550676 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Oct 30 00:02:20.550685 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Oct 30 00:02:20.550695 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Oct 30 00:02:20.550706 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 30 00:02:20.550718 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct 30 00:02:20.550734 kernel: pinctrl core: initialized pinctrl subsystem
Oct 30 00:02:20.550746 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 30 00:02:20.550758 kernel: audit: initializing netlink subsys (disabled)
Oct 30 00:02:20.550770 kernel: audit: type=2000 audit(1761782536.921:1): state=initialized audit_enabled=0 res=1
Oct 30 00:02:20.550782 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 30 00:02:20.550794 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 30 00:02:20.550806 kernel: cpuidle: using governor menu
Oct 30 00:02:20.550821 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 30 00:02:20.550833 kernel: dca service started, version 1.12.1
Oct 30 00:02:20.550845 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Oct 30 00:02:20.550857 kernel: PCI: Using configuration type 1 for base access
Oct 30 00:02:20.550868 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 30 00:02:20.550880 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 30 00:02:20.550892 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 30 00:02:20.550907 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 30 00:02:20.550919 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 30 00:02:20.550931 kernel: ACPI: Added _OSI(Module Device)
Oct 30 00:02:20.550942 kernel: ACPI: Added _OSI(Processor Device)
Oct 30 00:02:20.550954 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 30 00:02:20.550966 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 30 00:02:20.550977 kernel: ACPI: Interpreter enabled
Oct 30 00:02:20.550991 kernel: ACPI: PM: (supports S0 S3 S5)
Oct 30 00:02:20.551003 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 30 00:02:20.551015 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 30 00:02:20.551026 kernel: PCI: Using E820 reservations for host bridge windows
Oct 30 00:02:20.551038 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Oct 30 00:02:20.551050 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 30 00:02:20.551445 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 30 00:02:20.551864 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Oct 30 00:02:20.552088 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Oct 30 00:02:20.552105 kernel: PCI host bridge to bus 0000:00
Oct 30 00:02:20.552340 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Oct 30 00:02:20.552523 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Oct 30 00:02:20.552727 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 30 00:02:20.552908 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Oct 30 00:02:20.553071 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Oct 30 00:02:20.553243 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Oct 30 00:02:20.553409 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 30 00:02:20.553687 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Oct 30 00:02:20.553936 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Oct 30 00:02:20.554116 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Oct 30 00:02:20.554359 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Oct 30 00:02:20.554562 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Oct 30 00:02:20.554764 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 30 00:02:20.554974 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct 30 00:02:20.555173 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Oct 30 00:02:20.555418 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Oct 30 00:02:20.555652 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Oct 30 00:02:20.555864 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Oct 30 00:02:20.556052 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Oct 30 00:02:20.556253 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Oct 30 00:02:20.556467 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Oct 30 00:02:20.556726 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct 30 00:02:20.556908 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Oct 30 00:02:20.557082 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Oct 30 00:02:20.557288 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Oct 30 00:02:20.557486 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Oct 30 00:02:20.557697 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Oct 30 00:02:20.557910 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Oct 30 00:02:20.558105 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Oct 30 00:02:20.558318 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Oct 30 00:02:20.558594 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Oct 30 00:02:20.558821 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Oct 30 00:02:20.558998 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Oct 30 00:02:20.559010 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 30 00:02:20.559019 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 30 00:02:20.559028 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 30 00:02:20.559042 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 30 00:02:20.559051 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Oct 30 00:02:20.559060 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Oct 30 00:02:20.559068 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Oct 30 00:02:20.559077 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Oct 30 00:02:20.559085 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Oct 30 00:02:20.559094 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Oct 30 00:02:20.559105 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Oct 30 00:02:20.559113 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Oct 30 00:02:20.559122 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Oct 30 00:02:20.559130 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Oct 30 00:02:20.559139 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Oct 30 00:02:20.559147 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Oct 30 00:02:20.559159 kernel: iommu: Default domain type: Translated
Oct 30 00:02:20.559169 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 30 00:02:20.559193 kernel: efivars: Registered efivars operations
Oct 30 00:02:20.559204 kernel: PCI: Using ACPI for IRQ routing
Oct 30 00:02:20.559215 kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 30 00:02:20.559226 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Oct 30 00:02:20.559237 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Oct 30 00:02:20.559248 kernel: e820: reserve RAM buffer [mem 0x9b2dc018-0x9bffffff]
Oct 30 00:02:20.559259 kernel: e820: reserve RAM buffer [mem 0x9b319018-0x9bffffff]
Oct 30 00:02:20.559275 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Oct 30 00:02:20.559285 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Oct 30 00:02:20.559294 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Oct 30 00:02:20.559303 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Oct 30 00:02:20.559519 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Oct 30 00:02:20.559717 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Oct 30 00:02:20.559896 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 30 00:02:20.559907 kernel: vgaarb: loaded
Oct 30 00:02:20.559916 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Oct 30 00:02:20.559926 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Oct 30 00:02:20.559935 kernel: clocksource: Switched to clocksource kvm-clock
Oct 30 00:02:20.559943 kernel: VFS: Disk quotas dquot_6.6.0
Oct 30 00:02:20.559952 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 30 00:02:20.559964 kernel: pnp: PnP ACPI init
Oct 30 00:02:20.560205 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Oct 30 00:02:20.560223 kernel: pnp: PnP ACPI: found 6 devices
Oct 30 00:02:20.560232 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 30 00:02:20.560241 kernel: NET: Registered PF_INET protocol family
Oct 30 00:02:20.560251 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 30 00:02:20.560260 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 30 00:02:20.560346 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 30 00:02:20.560355 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 30 00:02:20.560364 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Oct 30 00:02:20.560373 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 30 00:02:20.560382 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 30 00:02:20.560391 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 30 00:02:20.560400 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 30 00:02:20.560417 kernel: NET: Registered PF_XDP protocol family
Oct 30 00:02:20.560596 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Oct 30 00:02:20.560791 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Oct 30 00:02:20.560967 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Oct 30 00:02:20.561192 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Oct 30 00:02:20.561359 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 30 00:02:20.561534 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Oct 30 00:02:20.561718 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Oct 30 00:02:20.561909 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Oct 30 00:02:20.561929 kernel: PCI: CLS 0 bytes, default 64
Oct 30 00:02:20.561942 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Oct 30 00:02:20.561967 kernel: Initialise system trusted keyrings
Oct 30 00:02:20.561977 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 30 00:02:20.561986 kernel: Key type asymmetric registered
Oct 30 00:02:20.561994 kernel: Asymmetric key parser 'x509' registered
Oct 30 00:02:20.562004 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Oct 30 00:02:20.562013 kernel: io scheduler mq-deadline registered
Oct 30 00:02:20.562030 kernel: io scheduler kyber registered
Oct 30 00:02:20.562039 kernel: io scheduler bfq registered
Oct 30 00:02:20.562048 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Oct 30 00:02:20.562058 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Oct 30 00:02:20.562067 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Oct 30 00:02:20.562077 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Oct 30 00:02:20.562086 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 30 00:02:20.562104 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 30 00:02:20.562114 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 30 00:02:20.562123 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 30 00:02:20.562132 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 30 00:02:20.562338 kernel: rtc_cmos 00:04: RTC can wake from S4
Oct 30 00:02:20.562352 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Oct 30 00:02:20.562517 kernel: rtc_cmos 00:04: registered as rtc0
Oct 30 00:02:20.562724 kernel: rtc_cmos 00:04: setting system clock to 2025-10-30T00:02:18 UTC (1761782538)
Oct 30 00:02:20.562896 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Oct 30 00:02:20.562908 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct 30 00:02:20.562918 kernel: efifb: probing for efifb
Oct 30 00:02:20.562927 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Oct 30 00:02:20.562936 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Oct 30 00:02:20.562945 kernel: efifb: scrolling: redraw
Oct 30 00:02:20.562967 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Oct 30 00:02:20.562976 kernel: Console: switching to colour frame buffer device 160x50
Oct 30 00:02:20.562985 kernel: fb0: EFI VGA frame buffer device
Oct 30 00:02:20.562994 kernel: pstore: Using crash dump compression: deflate
Oct 30 00:02:20.563003 kernel: pstore: Registered efi_pstore as persistent store backend
Oct 30 00:02:20.563012 kernel: NET: Registered PF_INET6 protocol family
Oct 30 00:02:20.563021 kernel: Segment Routing with IPv6
Oct 30 00:02:20.563037 kernel: In-situ OAM (IOAM) with IPv6
Oct 30 00:02:20.563046 kernel: NET: Registered PF_PACKET protocol family
Oct 30 00:02:20.563055 kernel: Key type dns_resolver registered
Oct 30 00:02:20.563064 kernel: IPI shorthand broadcast: enabled
Oct 30 00:02:20.563073 kernel: sched_clock: Marking stable (1630004628, 298907666)->(2014680275, -85767981)
Oct 30 00:02:20.563082 kernel: registered taskstats version 1
Oct 30 00:02:20.563091 kernel: Loading compiled-in X.509 certificates
Oct 30 00:02:20.563106 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: b5a3367ee15a1313a0db8339b653e9e56c1bb8d0'
Oct 30 00:02:20.563115 kernel: Demotion targets for Node 0: null
Oct 30 00:02:20.563124 kernel: Key type .fscrypt registered
Oct 30
00:02:20.563133 kernel: Key type fscrypt-provisioning registered Oct 30 00:02:20.563141 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 30 00:02:20.563150 kernel: ima: Allocated hash algorithm: sha1 Oct 30 00:02:20.563159 kernel: ima: No architecture policies found Oct 30 00:02:20.563175 kernel: clk: Disabling unused clocks Oct 30 00:02:20.563194 kernel: Freeing unused kernel image (initmem) memory: 15956K Oct 30 00:02:20.563203 kernel: Write protecting the kernel read-only data: 40960k Oct 30 00:02:20.563212 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Oct 30 00:02:20.563221 kernel: Run /init as init process Oct 30 00:02:20.563230 kernel: with arguments: Oct 30 00:02:20.563239 kernel: /init Oct 30 00:02:20.563247 kernel: with environment: Oct 30 00:02:20.563263 kernel: HOME=/ Oct 30 00:02:20.563272 kernel: TERM=linux Oct 30 00:02:20.563281 kernel: SCSI subsystem initialized Oct 30 00:02:20.563289 kernel: libata version 3.00 loaded. Oct 30 00:02:20.563472 kernel: ahci 0000:00:1f.2: version 3.0 Oct 30 00:02:20.563485 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Oct 30 00:02:20.563720 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Oct 30 00:02:20.563988 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Oct 30 00:02:20.564166 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Oct 30 00:02:20.564423 kernel: scsi host0: ahci Oct 30 00:02:20.564635 kernel: scsi host1: ahci Oct 30 00:02:20.564866 kernel: scsi host2: ahci Oct 30 00:02:20.565101 kernel: scsi host3: ahci Oct 30 00:02:20.565342 kernel: scsi host4: ahci Oct 30 00:02:20.565534 kernel: scsi host5: ahci Oct 30 00:02:20.565548 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1 Oct 30 00:02:20.565565 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1 Oct 30 00:02:20.565576 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1 
Oct 30 00:02:20.565739 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1
Oct 30 00:02:20.565752 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1
Oct 30 00:02:20.565763 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1
Oct 30 00:02:20.565775 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Oct 30 00:02:20.565786 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Oct 30 00:02:20.565816 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Oct 30 00:02:20.565831 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Oct 30 00:02:20.565859 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Oct 30 00:02:20.565871 kernel: ata3.00: LPM support broken, forcing max_power
Oct 30 00:02:20.565882 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct 30 00:02:20.565894 kernel: ata3.00: applying bridge limits
Oct 30 00:02:20.565916 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Oct 30 00:02:20.565940 kernel: ata3.00: LPM support broken, forcing max_power
Oct 30 00:02:20.565970 kernel: ata3.00: configured for UDMA/100
Oct 30 00:02:20.566276 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Oct 30 00:02:20.566554 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Oct 30 00:02:20.566784 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Oct 30 00:02:20.566804 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 30 00:02:20.566816 kernel: GPT:16515071 != 27000831
Oct 30 00:02:20.566825 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 30 00:02:20.566850 kernel: GPT:16515071 != 27000831
Oct 30 00:02:20.566862 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 30 00:02:20.566874 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 30 00:02:20.567095 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct 30 00:02:20.567109 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 30 00:02:20.567362 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Oct 30 00:02:20.567383 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 30 00:02:20.567411 kernel: device-mapper: uevent: version 1.0.3
Oct 30 00:02:20.567424 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Oct 30 00:02:20.567437 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Oct 30 00:02:20.567448 kernel: raid6: avx2x4 gen() 28801 MB/s
Oct 30 00:02:20.567461 kernel: raid6: avx2x2 gen() 30458 MB/s
Oct 30 00:02:20.567471 kernel: raid6: avx2x1 gen() 25142 MB/s
Oct 30 00:02:20.567480 kernel: raid6: using algorithm avx2x2 gen() 30458 MB/s
Oct 30 00:02:20.567498 kernel: raid6: .... xor() 19218 MB/s, rmw enabled
Oct 30 00:02:20.567508 kernel: raid6: using avx2x2 recovery algorithm
Oct 30 00:02:20.567517 kernel: xor: automatically using best checksumming function avx
Oct 30 00:02:20.567526 kernel: Btrfs loaded, zoned=no, fsverity=no
Oct 30 00:02:20.567535 kernel: BTRFS: device fsid 6b7350c1-23d8-4ac8-84c6-3e4efb0085fe devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (181)
Oct 30 00:02:20.567545 kernel: BTRFS info (device dm-0): first mount of filesystem 6b7350c1-23d8-4ac8-84c6-3e4efb0085fe
Oct 30 00:02:20.567553 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Oct 30 00:02:20.567569 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Oct 30 00:02:20.567578 kernel: BTRFS info (device dm-0): enabling free space tree
Oct 30 00:02:20.567587 kernel: loop: module loaded
Oct 30 00:02:20.567596 kernel: loop0: detected capacity change from 0 to 100120
Oct 30 00:02:20.567630 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Oct 30 00:02:20.567646 systemd[1]: Successfully made /usr/ read-only.
Oct 30 00:02:20.567680 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 30 00:02:20.567691 systemd[1]: Detected virtualization kvm.
Oct 30 00:02:20.567700 systemd[1]: Detected architecture x86-64.
Oct 30 00:02:20.567709 systemd[1]: Running in initrd.
Oct 30 00:02:20.567718 systemd[1]: No hostname configured, using default hostname.
Oct 30 00:02:20.567728 systemd[1]: Hostname set to .
Oct 30 00:02:20.567745 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Oct 30 00:02:20.567755 systemd[1]: Queued start job for default target initrd.target.
Oct 30 00:02:20.567764 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Oct 30 00:02:20.567774 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 30 00:02:20.567784 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 30 00:02:20.567794 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Oct 30 00:02:20.567804 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 30 00:02:20.567821 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Oct 30 00:02:20.567831 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Oct 30 00:02:20.567841 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 30 00:02:20.567850 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 30 00:02:20.567859 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Oct 30 00:02:20.567878 systemd[1]: Reached target paths.target - Path Units.
Oct 30 00:02:20.567887 systemd[1]: Reached target slices.target - Slice Units.
Oct 30 00:02:20.567897 systemd[1]: Reached target swap.target - Swaps.
Oct 30 00:02:20.567906 systemd[1]: Reached target timers.target - Timer Units.
Oct 30 00:02:20.567915 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Oct 30 00:02:20.567925 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 30 00:02:20.567934 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Oct 30 00:02:20.567951 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Oct 30 00:02:20.567960 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 30 00:02:20.567970 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 30 00:02:20.567979 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 30 00:02:20.567989 systemd[1]: Reached target sockets.target - Socket Units.
Oct 30 00:02:20.567999 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Oct 30 00:02:20.568008 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Oct 30 00:02:20.568025 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 30 00:02:20.568035 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Oct 30 00:02:20.568045 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Oct 30 00:02:20.568055 systemd[1]: Starting systemd-fsck-usr.service...
Oct 30 00:02:20.568064 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 30 00:02:20.568074 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 30 00:02:20.568084 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 30 00:02:20.568101 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Oct 30 00:02:20.568111 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 30 00:02:20.568121 systemd[1]: Finished systemd-fsck-usr.service.
Oct 30 00:02:20.568137 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Oct 30 00:02:20.568194 systemd-journald[314]: Collecting audit messages is disabled.
Oct 30 00:02:20.568218 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 30 00:02:20.568238 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Oct 30 00:02:20.568248 systemd-journald[314]: Journal started
Oct 30 00:02:20.568268 systemd-journald[314]: Runtime Journal (/run/log/journal/733638506a994000aec25a3ed220905c) is 6M, max 48.1M, 42.1M free.
Oct 30 00:02:20.584783 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 30 00:02:20.584837 kernel: Bridge firewalling registered
Oct 30 00:02:20.577996 systemd-modules-load[317]: Inserted module 'br_netfilter'
Oct 30 00:02:20.588290 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Oct 30 00:02:20.590853 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 30 00:02:20.591715 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 30 00:02:20.596022 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Oct 30 00:02:20.597157 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Oct 30 00:02:20.601834 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Oct 30 00:02:20.617032 systemd-tmpfiles[341]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Oct 30 00:02:20.620506 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 30 00:02:20.622163 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 30 00:02:20.628871 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Oct 30 00:02:20.632208 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Oct 30 00:02:20.635683 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 30 00:02:20.646047 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Oct 30 00:02:20.669652 dracut-cmdline[355]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=56cc5d11e9ee9e328725323e5b298567de51aff19ad0756381062170c9c03796
Oct 30 00:02:20.715871 systemd-resolved[358]: Positive Trust Anchors:
Oct 30 00:02:20.715891 systemd-resolved[358]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 30 00:02:20.715896 systemd-resolved[358]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Oct 30 00:02:20.715937 systemd-resolved[358]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Oct 30 00:02:20.771925 systemd-resolved[358]: Defaulting to hostname 'linux'.
Oct 30 00:02:20.773931 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Oct 30 00:02:20.776842 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Oct 30 00:02:20.862704 kernel: Loading iSCSI transport class v2.0-870.
Oct 30 00:02:20.878650 kernel: iscsi: registered transport (tcp)
Oct 30 00:02:20.906989 kernel: iscsi: registered transport (qla4xxx)
Oct 30 00:02:20.907065 kernel: QLogic iSCSI HBA Driver
Oct 30 00:02:20.938962 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 30 00:02:20.971322 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 30 00:02:20.974110 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 30 00:02:21.039206 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Oct 30 00:02:21.042267 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Oct 30 00:02:21.046322 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Oct 30 00:02:21.079000 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Oct 30 00:02:21.084215 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 30 00:02:21.122956 systemd-udevd[599]: Using default interface naming scheme 'v257'.
Oct 30 00:02:21.139817 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 30 00:02:21.146350 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Oct 30 00:02:21.173724 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 30 00:02:21.181289 dracut-pre-trigger[674]: rd.md=0: removing MD RAID activation
Oct 30 00:02:21.182303 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Oct 30 00:02:21.213057 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 30 00:02:21.216114 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 30 00:02:21.241273 systemd-networkd[709]: lo: Link UP
Oct 30 00:02:21.241285 systemd-networkd[709]: lo: Gained carrier
Oct 30 00:02:21.242004 systemd[1]: Started systemd-networkd.service - Network Configuration.
Oct 30 00:02:21.244515 systemd[1]: Reached target network.target - Network.
Oct 30 00:02:21.318087 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 30 00:02:21.323778 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Oct 30 00:02:21.376140 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Oct 30 00:02:21.394781 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Oct 30 00:02:21.421637 kernel: cryptd: max_cpu_qlen set to 1000
Oct 30 00:02:21.433863 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Oct 30 00:02:21.450182 kernel: AES CTR mode by8 optimization enabled
Oct 30 00:02:21.452221 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Oct 30 00:02:21.457714 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Oct 30 00:02:21.470695 systemd-networkd[709]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 30 00:02:21.470719 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Oct 30 00:02:21.472093 systemd-networkd[709]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 30 00:02:21.472583 systemd-networkd[709]: eth0: Link UP
Oct 30 00:02:21.473563 systemd-networkd[709]: eth0: Gained carrier
Oct 30 00:02:21.473573 systemd-networkd[709]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Oct 30 00:02:21.487481 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 30 00:02:21.488112 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 30 00:02:21.488881 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Oct 30 00:02:21.498670 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Oct 30 00:02:21.506803 systemd-networkd[709]: eth0: DHCPv4 address 10.0.0.82/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 30 00:02:21.511693 disk-uuid[834]: Primary Header is updated.
Oct 30 00:02:21.511693 disk-uuid[834]: Secondary Entries is updated.
Oct 30 00:02:21.511693 disk-uuid[834]: Secondary Header is updated.
Oct 30 00:02:21.541929 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Oct 30 00:02:21.564139 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Oct 30 00:02:21.565572 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 30 00:02:21.566487 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 30 00:02:21.567078 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 30 00:02:21.584660 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Oct 30 00:02:21.619627 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Oct 30 00:02:22.563065 disk-uuid[838]: Warning: The kernel is still using the old partition table.
Oct 30 00:02:22.563065 disk-uuid[838]: The new table will be used at the next reboot or after you
Oct 30 00:02:22.563065 disk-uuid[838]: run partprobe(8) or kpartx(8)
Oct 30 00:02:22.563065 disk-uuid[838]: The operation has completed successfully.
Oct 30 00:02:22.569496 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 30 00:02:22.569691 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Oct 30 00:02:22.574983 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Oct 30 00:02:22.623889 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (864)
Oct 30 00:02:22.623959 kernel: BTRFS info (device vda6): first mount of filesystem 03993d8b-786f-4e51-be25-d341ee6662e9
Oct 30 00:02:22.624014 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 30 00:02:22.629647 kernel: BTRFS info (device vda6): turning on async discard
Oct 30 00:02:22.629687 kernel: BTRFS info (device vda6): enabling free space tree
Oct 30 00:02:22.638657 kernel: BTRFS info (device vda6): last unmount of filesystem 03993d8b-786f-4e51-be25-d341ee6662e9
Oct 30 00:02:22.640253 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Oct 30 00:02:22.643503 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Oct 30 00:02:22.775590 ignition[883]: Ignition 2.22.0
Oct 30 00:02:22.775621 ignition[883]: Stage: fetch-offline
Oct 30 00:02:22.775665 ignition[883]: no configs at "/usr/lib/ignition/base.d"
Oct 30 00:02:22.775678 ignition[883]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 30 00:02:22.775764 ignition[883]: parsed url from cmdline: ""
Oct 30 00:02:22.775768 ignition[883]: no config URL provided
Oct 30 00:02:22.775772 ignition[883]: reading system config file "/usr/lib/ignition/user.ign"
Oct 30 00:02:22.775784 ignition[883]: no config at "/usr/lib/ignition/user.ign"
Oct 30 00:02:22.775830 ignition[883]: op(1): [started] loading QEMU firmware config module
Oct 30 00:02:22.775835 ignition[883]: op(1): executing: "modprobe" "qemu_fw_cfg"
Oct 30 00:02:22.787413 ignition[883]: op(1): [finished] loading QEMU firmware config module
Oct 30 00:02:22.872966 ignition[883]: parsing config with SHA512: c90f083e99672b6ed59f90e3e04611213e8ae4d5a4b7775d77f2bce3d56e467b8c8ea3b12b606ac8963ccb0919928c4d6c49b27c22d7744b9925de18f127502e
Oct 30 00:02:22.880567 unknown[883]: fetched base config from "system"
Oct 30 00:02:22.880594 unknown[883]: fetched user config from "qemu"
Oct 30 00:02:22.881096 ignition[883]: fetch-offline: fetch-offline passed
Oct 30 00:02:22.881178 ignition[883]: Ignition finished successfully
Oct 30 00:02:22.885337 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 30 00:02:22.888465 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Oct 30 00:02:22.889722 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Oct 30 00:02:22.943544 ignition[897]: Ignition 2.22.0
Oct 30 00:02:22.943562 ignition[897]: Stage: kargs
Oct 30 00:02:22.943779 ignition[897]: no configs at "/usr/lib/ignition/base.d"
Oct 30 00:02:22.943793 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 30 00:02:22.944798 ignition[897]: kargs: kargs passed
Oct 30 00:02:22.951358 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Oct 30 00:02:22.944862 ignition[897]: Ignition finished successfully
Oct 30 00:02:22.954529 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Oct 30 00:02:22.997158 ignition[905]: Ignition 2.22.0
Oct 30 00:02:22.997181 ignition[905]: Stage: disks
Oct 30 00:02:22.997360 ignition[905]: no configs at "/usr/lib/ignition/base.d"
Oct 30 00:02:22.997375 ignition[905]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 30 00:02:23.001411 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Oct 30 00:02:22.998247 ignition[905]: disks: disks passed
Oct 30 00:02:23.003719 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Oct 30 00:02:22.998308 ignition[905]: Ignition finished successfully
Oct 30 00:02:23.007176 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Oct 30 00:02:23.010809 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 30 00:02:23.012686 systemd[1]: Reached target sysinit.target - System Initialization.
Oct 30 00:02:23.015448 systemd[1]: Reached target basic.target - Basic System.
Oct 30 00:02:23.018393 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Oct 30 00:02:23.072656 systemd-fsck[915]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Oct 30 00:02:23.081144 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Oct 30 00:02:23.087387 systemd[1]: Mounting sysroot.mount - /sysroot...
Oct 30 00:02:23.260661 kernel: EXT4-fs (vda9): mounted filesystem 357f8fb5-672c-465c-a10c-74ee57b7ef1c r/w with ordered data mode. Quota mode: none.
Oct 30 00:02:23.261739 systemd[1]: Mounted sysroot.mount - /sysroot.
Oct 30 00:02:23.263204 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Oct 30 00:02:23.268795 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 30 00:02:23.271384 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Oct 30 00:02:23.273147 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Oct 30 00:02:23.273223 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 30 00:02:23.273274 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 30 00:02:23.295544 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Oct 30 00:02:23.299078 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Oct 30 00:02:23.306467 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (923)
Oct 30 00:02:23.310639 kernel: BTRFS info (device vda6): first mount of filesystem 03993d8b-786f-4e51-be25-d341ee6662e9
Oct 30 00:02:23.310689 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 30 00:02:23.318172 kernel: BTRFS info (device vda6): turning on async discard
Oct 30 00:02:23.318238 kernel: BTRFS info (device vda6): enabling free space tree
Oct 30 00:02:23.320434 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 30 00:02:23.382967 initrd-setup-root[947]: cut: /sysroot/etc/passwd: No such file or directory
Oct 30 00:02:23.390497 initrd-setup-root[954]: cut: /sysroot/etc/group: No such file or directory
Oct 30 00:02:23.397127 initrd-setup-root[961]: cut: /sysroot/etc/shadow: No such file or directory
Oct 30 00:02:23.402173 initrd-setup-root[968]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 30 00:02:23.419850 systemd-networkd[709]: eth0: Gained IPv6LL
Oct 30 00:02:23.729591 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Oct 30 00:02:23.734190 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Oct 30 00:02:23.739216 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Oct 30 00:02:23.766849 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Oct 30 00:02:23.769359 kernel: BTRFS info (device vda6): last unmount of filesystem 03993d8b-786f-4e51-be25-d341ee6662e9
Oct 30 00:02:23.787788 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Oct 30 00:02:23.840540 ignition[1037]: INFO : Ignition 2.22.0
Oct 30 00:02:23.840540 ignition[1037]: INFO : Stage: mount
Oct 30 00:02:23.843861 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 30 00:02:23.843861 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 30 00:02:23.843861 ignition[1037]: INFO : mount: mount passed
Oct 30 00:02:23.843861 ignition[1037]: INFO : Ignition finished successfully
Oct 30 00:02:23.845485 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Oct 30 00:02:23.848959 systemd[1]: Starting ignition-files.service - Ignition (files)...
Oct 30 00:02:23.878073 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Oct 30 00:02:23.926064 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1049)
Oct 30 00:02:23.926146 kernel: BTRFS info (device vda6): first mount of filesystem 03993d8b-786f-4e51-be25-d341ee6662e9
Oct 30 00:02:23.926181 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Oct 30 00:02:23.932143 kernel: BTRFS info (device vda6): turning on async discard
Oct 30 00:02:23.932196 kernel: BTRFS info (device vda6): enabling free space tree
Oct 30 00:02:23.934419 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Oct 30 00:02:24.071902 ignition[1066]: INFO : Ignition 2.22.0
Oct 30 00:02:24.071902 ignition[1066]: INFO : Stage: files
Oct 30 00:02:24.081654 ignition[1066]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 30 00:02:24.081654 ignition[1066]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 30 00:02:24.081654 ignition[1066]: DEBUG : files: compiled without relabeling support, skipping
Oct 30 00:02:24.089417 ignition[1066]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 30 00:02:24.089417 ignition[1066]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 30 00:02:24.096508 ignition[1066]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 30 00:02:24.099320 ignition[1066]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 30 00:02:24.102829 unknown[1066]: wrote ssh authorized keys file for user: core
Oct 30 00:02:24.105141 ignition[1066]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 30 00:02:24.108241 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Oct 30 00:02:24.108241 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Oct 30 00:02:24.173294 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 30 00:02:24.352544 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Oct 30 00:02:24.352544 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Oct 30 00:02:24.359811 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Oct 30 00:02:24.359811 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Oct 30 00:02:24.359811 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Oct 30 00:02:24.359811 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 30 00:02:24.359811 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Oct 30 00:02:24.359811 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 30 00:02:24.359811 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Oct 30 00:02:24.380847 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Oct 30 00:02:24.380847 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Oct 30 00:02:24.380847 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Oct 30 00:02:24.380847 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Oct 30 00:02:24.380847 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Oct 30 00:02:24.380847 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Oct 30 00:02:24.696013 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Oct 30 00:02:25.886759 ignition[1066]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Oct 30 00:02:25.886759 ignition[1066]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Oct 30 00:02:25.893297 ignition[1066]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 30 00:02:25.901162 ignition[1066]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Oct 30 00:02:25.901162 ignition[1066]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Oct 30 00:02:25.901162 ignition[1066]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Oct 30 00:02:25.909227 ignition[1066]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 30 00:02:25.909227 ignition[1066]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 30 00:02:25.909227 ignition[1066]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Oct 30 00:02:25.909227 ignition[1066]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Oct 30 00:02:25.964255 ignition[1066]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Oct 30 00:02:25.975466 ignition[1066]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Oct 30 00:02:25.978448 ignition[1066]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Oct 30 00:02:25.978448 ignition[1066]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Oct 30 00:02:25.978448 ignition[1066]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Oct 30 00:02:25.978448 ignition[1066]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 30 00:02:25.978448 ignition[1066]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 30 00:02:25.978448 ignition[1066]: INFO : files: files passed
Oct 30 00:02:25.978448 ignition[1066]: INFO : Ignition finished successfully
Oct 30 00:02:25.995498 systemd[1]: Finished ignition-files.service - Ignition (files).
Oct 30 00:02:26.001056 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Oct 30 00:02:26.004252 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Oct 30 00:02:26.040052 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 30 00:02:26.040250 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Oct 30 00:02:26.047840 initrd-setup-root-after-ignition[1097]: grep: /sysroot/oem/oem-release: No such file or directory
Oct 30 00:02:26.053391 initrd-setup-root-after-ignition[1099]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 30 00:02:26.056654 initrd-setup-root-after-ignition[1103]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 30 00:02:26.056594 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 30 00:02:26.066464 initrd-setup-root-after-ignition[1099]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Oct 30 00:02:26.057792 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Oct 30 00:02:26.059469 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Oct 30 00:02:26.133575 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 30 00:02:26.133814 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Oct 30 00:02:26.137708 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Oct 30 00:02:26.138420 systemd[1]: Reached target initrd.target - Initrd Default Target.
Oct 30 00:02:26.143791 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Oct 30 00:02:26.144939 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Oct 30 00:02:26.194423 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 30 00:02:26.196805 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Oct 30 00:02:26.235898 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Oct 30 00:02:26.236226 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Oct 30 00:02:26.237675 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 30 00:02:26.238512 systemd[1]: Stopped target timers.target - Timer Units.
Oct 30 00:02:26.246753 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 30 00:02:26.246919 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Oct 30 00:02:26.251863 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Oct 30 00:02:26.253164 systemd[1]: Stopped target basic.target - Basic System.
Oct 30 00:02:26.257512 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Oct 30 00:02:26.263673 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Oct 30 00:02:26.264565 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Oct 30 00:02:26.269815 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Oct 30 00:02:26.270820 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Oct 30 00:02:26.271386 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Oct 30 00:02:26.272307 systemd[1]: Stopped target sysinit.target - System Initialization.
Oct 30 00:02:26.273271 systemd[1]: Stopped target local-fs.target - Local File Systems.
Oct 30 00:02:26.274204 systemd[1]: Stopped target swap.target - Swaps.
Oct 30 00:02:26.292037 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 30 00:02:26.292272 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Oct 30 00:02:26.297028 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Oct 30 00:02:26.298047 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 30 00:02:26.301125 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Oct 30 00:02:26.301267 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 30 00:02:26.305157 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 30 00:02:26.305344 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Oct 30 00:02:26.312038 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 30 00:02:26.312180 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Oct 30 00:02:26.316001 systemd[1]: Stopped target paths.target - Path Units.
Oct 30 00:02:26.317287 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 30 00:02:26.320706 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 30 00:02:26.321679 systemd[1]: Stopped target slices.target - Slice Units.
Oct 30 00:02:26.322210 systemd[1]: Stopped target sockets.target - Socket Units.
Oct 30 00:02:26.328375 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 30 00:02:26.328544 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Oct 30 00:02:26.332105 systemd[1]: iscsiuio.socket: Deactivated successfully.
Oct 30 00:02:26.332239 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Oct 30 00:02:26.336235 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 30 00:02:26.336429 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Oct 30 00:02:26.339195 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 30 00:02:26.339349 systemd[1]: Stopped ignition-files.service - Ignition (files).
Oct 30 00:02:26.347398 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Oct 30 00:02:26.348445 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 30 00:02:26.348672 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 30 00:02:26.350682 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Oct 30 00:02:26.359134 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 30 00:02:26.359538 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Oct 30 00:02:26.360439 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 30 00:02:26.360637 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Oct 30 00:02:26.365307 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 30 00:02:26.365434 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Oct 30 00:02:26.386554 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 30 00:02:26.386769 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Oct 30 00:02:26.419466 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Oct 30 00:02:26.447333 ignition[1123]: INFO : Ignition 2.22.0
Oct 30 00:02:26.447333 ignition[1123]: INFO : Stage: umount
Oct 30 00:02:26.451956 ignition[1123]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 30 00:02:26.451956 ignition[1123]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 30 00:02:26.451956 ignition[1123]: INFO : umount: umount passed
Oct 30 00:02:26.451956 ignition[1123]: INFO : Ignition finished successfully
Oct 30 00:02:26.460208 systemd[1]: ignition-mount.service: Deactivated successfully.
Oct 30 00:02:26.460407 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Oct 30 00:02:26.461768 systemd[1]: Stopped target network.target - Network.
Oct 30 00:02:26.462341 systemd[1]: ignition-disks.service: Deactivated successfully.
Oct 30 00:02:26.462432 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Oct 30 00:02:26.469597 systemd[1]: ignition-kargs.service: Deactivated successfully.
Oct 30 00:02:26.469714 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Oct 30 00:02:26.473642 systemd[1]: ignition-setup.service: Deactivated successfully.
Oct 30 00:02:26.473716 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Oct 30 00:02:26.474281 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Oct 30 00:02:26.474342 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Oct 30 00:02:26.480315 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Oct 30 00:02:26.481233 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Oct 30 00:02:26.501292 systemd[1]: sysroot-boot.service: Deactivated successfully.
Oct 30 00:02:26.502868 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Oct 30 00:02:26.505119 systemd[1]: systemd-resolved.service: Deactivated successfully.
Oct 30 00:02:26.505304 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Oct 30 00:02:26.511510 systemd[1]: systemd-networkd.service: Deactivated successfully.
Oct 30 00:02:26.511704 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Oct 30 00:02:26.519353 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Oct 30 00:02:26.520161 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Oct 30 00:02:26.520252 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Oct 30 00:02:26.521041 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Oct 30 00:02:26.521151 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Oct 30 00:02:26.523039 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Oct 30 00:02:26.530443 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Oct 30 00:02:26.530572 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Oct 30 00:02:26.531461 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 30 00:02:26.531558 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Oct 30 00:02:26.538242 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 30 00:02:26.538308 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Oct 30 00:02:26.539404 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Oct 30 00:02:26.566327 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 30 00:02:26.566585 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Oct 30 00:02:26.569510 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 30 00:02:26.569584 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Oct 30 00:02:26.574151 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 30 00:02:26.574213 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 30 00:02:26.575060 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 30 00:02:26.575159 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Oct 30 00:02:26.581707 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 30 00:02:26.581821 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Oct 30 00:02:26.583123 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Oct 30 00:02:26.583233 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Oct 30 00:02:26.601755 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Oct 30 00:02:26.603736 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Oct 30 00:02:26.603841 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Oct 30 00:02:26.606489 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 30 00:02:26.606551 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Oct 30 00:02:26.607364 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 30 00:02:26.607422 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Oct 30 00:02:26.620175 systemd[1]: network-cleanup.service: Deactivated successfully.
Oct 30 00:02:26.627905 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Oct 30 00:02:26.640019 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 30 00:02:26.640201 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Oct 30 00:02:26.641881 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Oct 30 00:02:26.650054 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Oct 30 00:02:26.693084 systemd[1]: Switching root.
Oct 30 00:02:26.730725 systemd-journald[314]: Journal stopped
Oct 30 00:02:28.367371 systemd-journald[314]: Received SIGTERM from PID 1 (systemd).
Oct 30 00:02:28.367454 kernel: SELinux: policy capability network_peer_controls=1
Oct 30 00:02:28.367481 kernel: SELinux: policy capability open_perms=1
Oct 30 00:02:28.367498 kernel: SELinux: policy capability extended_socket_class=1
Oct 30 00:02:28.367515 kernel: SELinux: policy capability always_check_network=0
Oct 30 00:02:28.367528 kernel: SELinux: policy capability cgroup_seclabel=1
Oct 30 00:02:28.367540 kernel: SELinux: policy capability nnp_nosuid_transition=1
Oct 30 00:02:28.367552 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Oct 30 00:02:28.367576 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Oct 30 00:02:28.367588 kernel: SELinux: policy capability userspace_initial_context=0
Oct 30 00:02:28.367692 kernel: audit: type=1403 audit(1761782547.291:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 30 00:02:28.367706 systemd[1]: Successfully loaded SELinux policy in 73.911ms.
Oct 30 00:02:28.367728 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.175ms.
Oct 30 00:02:28.367751 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Oct 30 00:02:28.367765 systemd[1]: Detected virtualization kvm.
Oct 30 00:02:28.367882 systemd[1]: Detected architecture x86-64.
Oct 30 00:02:28.367896 systemd[1]: Detected first boot.
Oct 30 00:02:28.367909 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Oct 30 00:02:28.367927 zram_generator::config[1168]: No configuration found.
Oct 30 00:02:28.367942 kernel: Guest personality initialized and is inactive
Oct 30 00:02:28.367964 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Oct 30 00:02:28.367976 kernel: Initialized host personality
Oct 30 00:02:28.367997 kernel: NET: Registered PF_VSOCK protocol family
Oct 30 00:02:28.368010 systemd[1]: Populated /etc with preset unit settings.
Oct 30 00:02:28.368023 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 30 00:02:28.368039 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Oct 30 00:02:28.368059 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 30 00:02:28.368073 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Oct 30 00:02:28.368088 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Oct 30 00:02:28.368109 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Oct 30 00:02:28.368122 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Oct 30 00:02:28.368135 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Oct 30 00:02:28.368148 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Oct 30 00:02:28.368161 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Oct 30 00:02:28.368174 systemd[1]: Created slice user.slice - User and Session Slice.
Oct 30 00:02:28.368187 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Oct 30 00:02:28.368208 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Oct 30 00:02:28.368222 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Oct 30 00:02:28.368235 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Oct 30 00:02:28.368251 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Oct 30 00:02:28.368266 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Oct 30 00:02:28.368281 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Oct 30 00:02:28.368302 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Oct 30 00:02:28.368316 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Oct 30 00:02:28.368329 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Oct 30 00:02:28.368342 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Oct 30 00:02:28.368355 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Oct 30 00:02:28.368368 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Oct 30 00:02:28.368381 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Oct 30 00:02:28.368401 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Oct 30 00:02:28.368414 systemd[1]: Reached target slices.target - Slice Units.
Oct 30 00:02:28.368427 systemd[1]: Reached target swap.target - Swaps.
Oct 30 00:02:28.368441 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Oct 30 00:02:28.368454 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Oct 30 00:02:28.368470 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Oct 30 00:02:28.368483 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Oct 30 00:02:28.368497 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Oct 30 00:02:28.368523 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Oct 30 00:02:28.368536 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Oct 30 00:02:28.368552 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Oct 30 00:02:28.368565 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Oct 30 00:02:28.368578 systemd[1]: Mounting media.mount - External Media Directory...
Oct 30 00:02:28.368591 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 30 00:02:28.368617 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Oct 30 00:02:28.368639 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Oct 30 00:02:28.368653 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Oct 30 00:02:28.368666 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 30 00:02:28.368680 systemd[1]: Reached target machines.target - Containers.
Oct 30 00:02:28.368693 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Oct 30 00:02:28.368706 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Oct 30 00:02:28.368727 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Oct 30 00:02:28.368740 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Oct 30 00:02:28.368753 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Oct 30 00:02:28.368766 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Oct 30 00:02:28.368782 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Oct 30 00:02:28.368795 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Oct 30 00:02:28.368807 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Oct 30 00:02:28.368830 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Oct 30 00:02:28.368849 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 30 00:02:28.368862 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Oct 30 00:02:28.368874 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Oct 30 00:02:28.368887 systemd[1]: Stopped systemd-fsck-usr.service.
Oct 30 00:02:28.368901 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Oct 30 00:02:28.368914 kernel: fuse: init (API version 7.41)
Oct 30 00:02:28.368935 systemd[1]: Starting systemd-journald.service - Journal Service...
Oct 30 00:02:28.368948 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Oct 30 00:02:28.368968 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Oct 30 00:02:28.368981 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Oct 30 00:02:28.369024 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Oct 30 00:02:28.369039 kernel: ACPI: bus type drm_connector registered
Oct 30 00:02:28.369052 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Oct 30 00:02:28.369066 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Oct 30 00:02:28.369079 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Oct 30 00:02:28.369092 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Oct 30 00:02:28.369123 systemd[1]: Mounted media.mount - External Media Directory.
Oct 30 00:02:28.369148 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Oct 30 00:02:28.369184 systemd-journald[1246]: Collecting audit messages is disabled.
Oct 30 00:02:28.369232 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Oct 30 00:02:28.369261 systemd-journald[1246]: Journal started
Oct 30 00:02:28.369302 systemd-journald[1246]: Runtime Journal (/run/log/journal/733638506a994000aec25a3ed220905c) is 6M, max 48.1M, 42.1M free.
Oct 30 00:02:28.006472 systemd[1]: Queued start job for default target multi-user.target.
Oct 30 00:02:28.021307 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Oct 30 00:02:28.022085 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 30 00:02:28.371942 systemd[1]: Started systemd-journald.service - Journal Service.
Oct 30 00:02:28.374554 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Oct 30 00:02:28.376833 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Oct 30 00:02:28.379427 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Oct 30 00:02:28.382156 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 30 00:02:28.382396 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Oct 30 00:02:28.385147 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Oct 30 00:02:28.385433 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Oct 30 00:02:28.388164 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 30 00:02:28.388511 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Oct 30 00:02:28.391029 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 30 00:02:28.391349 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Oct 30 00:02:28.394187 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 30 00:02:28.394474 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Oct 30 00:02:28.397026 systemd[1]: modprobe@loop.service: Deactivated successfully.
Oct 30 00:02:28.397321 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Oct 30 00:02:28.399986 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Oct 30 00:02:28.402969 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Oct 30 00:02:28.407678 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Oct 30 00:02:28.411300 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Oct 30 00:02:28.435440 systemd[1]: Reached target network-pre.target - Preparation for Network.
Oct 30 00:02:28.438787 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Oct 30 00:02:28.443813 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Oct 30 00:02:28.447694 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Oct 30 00:02:28.450044 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Oct 30 00:02:28.450092 systemd[1]: Reached target local-fs.target - Local File Systems.
Oct 30 00:02:28.453741 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Oct 30 00:02:28.456426 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 30 00:02:28.459539 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Oct 30 00:02:28.464071 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Oct 30 00:02:28.466436 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 30 00:02:28.471784 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 30 00:02:28.495198 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 30 00:02:28.499192 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 30 00:02:28.504059 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 30 00:02:28.512113 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 30 00:02:28.517597 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 30 00:02:28.521358 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 30 00:02:28.526978 systemd-journald[1246]: Time spent on flushing to /var/log/journal/733638506a994000aec25a3ed220905c is 33.656ms for 1051 entries. Oct 30 00:02:28.526978 systemd-journald[1246]: System Journal (/var/log/journal/733638506a994000aec25a3ed220905c) is 8M, max 163.5M, 155.5M free. Oct 30 00:02:28.575535 systemd-journald[1246]: Received client request to flush runtime journal. Oct 30 00:02:28.576316 kernel: loop1: detected capacity change from 0 to 219144 Oct 30 00:02:28.527676 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 30 00:02:28.535223 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 30 00:02:28.542920 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Oct 30 00:02:28.555941 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 30 00:02:28.559796 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Oct 30 00:02:28.579191 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 30 00:02:28.581914 kernel: loop2: detected capacity change from 0 to 128048 Oct 30 00:02:28.594190 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 30 00:02:28.596690 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Oct 30 00:02:28.601916 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 30 00:02:28.604866 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 30 00:02:28.615658 kernel: loop3: detected capacity change from 0 to 110976 Oct 30 00:02:28.621758 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 30 00:02:28.637807 systemd-tmpfiles[1305]: ACLs are not supported, ignoring. Oct 30 00:02:28.637824 systemd-tmpfiles[1305]: ACLs are not supported, ignoring. Oct 30 00:02:28.643264 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 30 00:02:28.650653 kernel: loop4: detected capacity change from 0 to 219144 Oct 30 00:02:28.659683 kernel: loop5: detected capacity change from 0 to 128048 Oct 30 00:02:28.670639 kernel: loop6: detected capacity change from 0 to 110976 Oct 30 00:02:28.681009 (sd-merge)[1310]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Oct 30 00:02:28.684198 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 30 00:02:28.687445 (sd-merge)[1310]: Merged extensions into '/usr'. Oct 30 00:02:28.694114 systemd[1]: Reload requested from client PID 1287 ('systemd-sysext') (unit systemd-sysext.service)... Oct 30 00:02:28.694129 systemd[1]: Reloading... Oct 30 00:02:28.761632 zram_generator::config[1342]: No configuration found. Oct 30 00:02:28.799488 systemd-resolved[1304]: Positive Trust Anchors: Oct 30 00:02:28.799693 systemd-resolved[1304]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 30 00:02:28.799698 systemd-resolved[1304]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Oct 30 00:02:28.799730 systemd-resolved[1304]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 30 00:02:28.803984 systemd-resolved[1304]: Defaulting to hostname 'linux'. Oct 30 00:02:29.029544 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 30 00:02:29.030228 systemd[1]: Reloading finished in 335 ms. Oct 30 00:02:29.067849 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 30 00:02:29.070047 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 30 00:02:29.074477 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 30 00:02:29.087064 systemd[1]: Starting ensure-sysext.service... Oct 30 00:02:29.089576 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 30 00:02:29.107138 systemd[1]: Reload requested from client PID 1379 ('systemctl') (unit ensure-sysext.service)... Oct 30 00:02:29.107156 systemd[1]: Reloading... Oct 30 00:02:29.112747 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Oct 30 00:02:29.112790 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. 
Oct 30 00:02:29.113107 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 30 00:02:29.113407 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 30 00:02:29.114403 systemd-tmpfiles[1380]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 30 00:02:29.114696 systemd-tmpfiles[1380]: ACLs are not supported, ignoring. Oct 30 00:02:29.114770 systemd-tmpfiles[1380]: ACLs are not supported, ignoring. Oct 30 00:02:29.120340 systemd-tmpfiles[1380]: Detected autofs mount point /boot during canonicalization of boot. Oct 30 00:02:29.120352 systemd-tmpfiles[1380]: Skipping /boot Oct 30 00:02:29.158637 zram_generator::config[1410]: No configuration found. Oct 30 00:02:29.206272 systemd-tmpfiles[1380]: Detected autofs mount point /boot during canonicalization of boot. Oct 30 00:02:29.206292 systemd-tmpfiles[1380]: Skipping /boot Oct 30 00:02:29.352439 systemd[1]: Reloading finished in 244 ms. Oct 30 00:02:29.395228 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 30 00:02:29.407291 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 30 00:02:29.433272 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 30 00:02:29.438899 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 30 00:02:29.445465 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 30 00:02:29.457921 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 30 00:02:29.462538 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 30 00:02:29.471438 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Oct 30 00:02:29.471824 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 30 00:02:29.480208 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 30 00:02:29.487017 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 30 00:02:29.494299 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 30 00:02:29.496472 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 30 00:02:29.496592 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 30 00:02:29.498275 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 30 00:02:29.500735 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 30 00:02:29.502886 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 30 00:02:29.506222 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 30 00:02:29.506444 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 30 00:02:29.510131 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 30 00:02:29.510568 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 30 00:02:29.514324 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 30 00:02:29.514830 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 30 00:02:29.537237 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Oct 30 00:02:29.537742 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 30 00:02:29.540013 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 30 00:02:29.545818 augenrules[1482]: No rules Oct 30 00:02:29.545839 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 30 00:02:29.558989 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 30 00:02:29.561511 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 30 00:02:29.562005 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 30 00:02:29.562424 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 30 00:02:29.566493 systemd[1]: audit-rules.service: Deactivated successfully. Oct 30 00:02:29.567496 systemd-udevd[1473]: Using default interface naming scheme 'v257'. Oct 30 00:02:29.569105 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 30 00:02:29.573218 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 30 00:02:29.576357 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 30 00:02:29.579183 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 30 00:02:29.579453 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 30 00:02:29.582431 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 30 00:02:29.582757 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Oct 30 00:02:29.585631 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 30 00:02:29.585917 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 30 00:02:29.599156 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 30 00:02:29.600807 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 30 00:02:29.602696 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 30 00:02:29.604825 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 30 00:02:29.609566 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 30 00:02:29.615871 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 30 00:02:29.623875 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 30 00:02:29.625903 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 30 00:02:29.626032 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 30 00:02:29.626166 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 30 00:02:29.626244 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 30 00:02:29.627398 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 30 00:02:29.630284 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Oct 30 00:02:29.630524 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 30 00:02:29.642590 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 30 00:02:29.646169 systemd[1]: Finished ensure-sysext.service. Oct 30 00:02:29.655158 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 30 00:02:29.657509 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 30 00:02:29.657811 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 30 00:02:29.660164 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 30 00:02:29.660388 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 30 00:02:29.664814 augenrules[1495]: /sbin/augenrules: No change Oct 30 00:02:29.664024 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 30 00:02:29.664412 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 30 00:02:29.680133 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 30 00:02:29.680215 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 30 00:02:29.835791 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Oct 30 00:02:30.077798 augenrules[1548]: No rules Oct 30 00:02:30.082005 systemd[1]: audit-rules.service: Deactivated successfully. Oct 30 00:02:30.082302 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 30 00:02:30.085624 kernel: mousedev: PS/2 mouse device common for all mice Oct 30 00:02:30.104162 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 30 00:02:30.109767 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Oct 30 00:02:30.124657 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Oct 30 00:02:30.140273 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 30 00:02:30.142628 kernel: ACPI: button: Power Button [PWRF] Oct 30 00:02:30.187490 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Oct 30 00:02:30.190037 systemd[1]: Reached target time-set.target - System Time Set. Oct 30 00:02:30.248454 systemd-networkd[1516]: lo: Link UP Oct 30 00:02:30.248474 systemd-networkd[1516]: lo: Gained carrier Oct 30 00:02:30.264895 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 30 00:02:30.273063 systemd-networkd[1516]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 30 00:02:30.279596 systemd[1]: Reached target network.target - Network. Oct 30 00:02:30.280351 systemd-networkd[1516]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 30 00:02:30.282095 systemd-networkd[1516]: eth0: Link UP Oct 30 00:02:30.282334 systemd-networkd[1516]: eth0: Gained carrier Oct 30 00:02:30.282351 systemd-networkd[1516]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Oct 30 00:02:30.284980 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Oct 30 00:02:30.289371 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 30 00:02:30.317708 systemd-networkd[1516]: eth0: DHCPv4 address 10.0.0.82/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 30 00:02:30.326255 systemd-timesyncd[1521]: Network configuration changed, trying to establish connection. Oct 30 00:02:30.327245 systemd-timesyncd[1521]: Contacted time server 10.0.0.1:123 (10.0.0.1). 
Oct 30 00:02:30.327364 systemd-timesyncd[1521]: Initial clock synchronization to Thu 2025-10-30 00:02:30.407279 UTC. Oct 30 00:02:30.338206 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Oct 30 00:02:30.339939 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Oct 30 00:02:30.340175 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Oct 30 00:02:30.337026 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Oct 30 00:02:30.352634 kernel: kvm_amd: TSC scaling supported Oct 30 00:02:30.352687 kernel: kvm_amd: Nested Virtualization enabled Oct 30 00:02:30.352702 kernel: kvm_amd: Nested Paging enabled Oct 30 00:02:30.352717 kernel: kvm_amd: LBR virtualization supported Oct 30 00:02:30.354877 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Oct 30 00:02:30.356798 kernel: kvm_amd: Virtual GIF supported Oct 30 00:02:30.354523 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 30 00:02:30.374701 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 30 00:02:30.374998 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 30 00:02:30.378895 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 30 00:02:30.458161 kernel: EDAC MC: Ver: 3.0.0 Oct 30 00:02:30.476834 ldconfig[1450]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 30 00:02:30.486225 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 30 00:02:30.489004 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 30 00:02:30.506198 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 30 00:02:30.516414 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 30 00:02:30.518738 systemd[1]: Reached target sysinit.target - System Initialization. 
Oct 30 00:02:30.520806 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 30 00:02:30.523258 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 30 00:02:30.525798 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Oct 30 00:02:30.528206 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 30 00:02:30.530364 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 30 00:02:30.532756 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 30 00:02:30.535096 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 30 00:02:30.535132 systemd[1]: Reached target paths.target - Path Units. Oct 30 00:02:30.536667 systemd[1]: Reached target timers.target - Timer Units. Oct 30 00:02:30.539990 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 30 00:02:30.544600 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 30 00:02:30.549734 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Oct 30 00:02:30.552365 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 30 00:02:30.554860 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 30 00:02:30.563496 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 30 00:02:30.565928 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 30 00:02:30.568903 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 30 00:02:30.571653 systemd[1]: Reached target sockets.target - Socket Units. Oct 30 00:02:30.573419 systemd[1]: Reached target basic.target - Basic System. 
Oct 30 00:02:30.575158 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 30 00:02:30.575194 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Oct 30 00:02:30.576712 systemd[1]: Starting containerd.service - containerd container runtime... Oct 30 00:02:30.580045 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 30 00:02:30.582757 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 30 00:02:30.585814 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 30 00:02:30.588748 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 30 00:02:30.589469 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 30 00:02:30.592055 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Oct 30 00:02:30.596858 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 30 00:02:30.597571 jq[1603]: false Oct 30 00:02:30.601663 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 30 00:02:30.603536 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 30 00:02:30.606193 google_oslogin_nss_cache[1605]: oslogin_cache_refresh[1605]: Refreshing passwd entry cache Oct 30 00:02:30.608795 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 30 00:02:30.608849 oslogin_cache_refresh[1605]: Refreshing passwd entry cache Oct 30 00:02:30.616195 systemd[1]: Starting systemd-logind.service - User Login Management... 
Oct 30 00:02:30.618318 google_oslogin_nss_cache[1605]: oslogin_cache_refresh[1605]: Failure getting users, quitting Oct 30 00:02:30.618685 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 30 00:02:30.618772 oslogin_cache_refresh[1605]: Failure getting users, quitting Oct 30 00:02:30.619253 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 30 00:02:30.619424 google_oslogin_nss_cache[1605]: oslogin_cache_refresh[1605]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 30 00:02:30.619424 google_oslogin_nss_cache[1605]: oslogin_cache_refresh[1605]: Refreshing group entry cache Oct 30 00:02:30.618807 oslogin_cache_refresh[1605]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Oct 30 00:02:30.618872 oslogin_cache_refresh[1605]: Refreshing group entry cache Oct 30 00:02:30.620733 systemd[1]: Starting update-engine.service - Update Engine... Oct 30 00:02:30.625226 extend-filesystems[1604]: Found /dev/vda6 Oct 30 00:02:30.628716 google_oslogin_nss_cache[1605]: oslogin_cache_refresh[1605]: Failure getting groups, quitting Oct 30 00:02:30.628716 google_oslogin_nss_cache[1605]: oslogin_cache_refresh[1605]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 30 00:02:30.625427 oslogin_cache_refresh[1605]: Failure getting groups, quitting Oct 30 00:02:30.625440 oslogin_cache_refresh[1605]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Oct 30 00:02:30.629857 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 30 00:02:30.635681 extend-filesystems[1604]: Found /dev/vda9 Oct 30 00:02:30.635733 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
Oct 30 00:02:30.640026 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 30 00:02:30.640437 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 30 00:02:30.640945 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Oct 30 00:02:30.642992 extend-filesystems[1604]: Checking size of /dev/vda9 Oct 30 00:02:30.647304 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Oct 30 00:02:30.651050 systemd[1]: motdgen.service: Deactivated successfully. Oct 30 00:02:30.651569 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 30 00:02:30.655116 jq[1621]: true Oct 30 00:02:30.656673 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 30 00:02:30.658838 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 30 00:02:30.683698 extend-filesystems[1604]: Resized partition /dev/vda9 Oct 30 00:02:30.692433 extend-filesystems[1639]: resize2fs 1.47.3 (8-Jul-2025) Oct 30 00:02:30.703089 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Oct 30 00:02:30.743661 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Oct 30 00:02:30.750243 (ntainerd)[1633]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 30 00:02:30.769250 update_engine[1616]: I20251030 00:02:30.750454 1616 main.cc:92] Flatcar Update Engine starting Oct 30 00:02:30.769594 jq[1632]: true Oct 30 00:02:30.772367 extend-filesystems[1639]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 30 00:02:30.772367 extend-filesystems[1639]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 30 00:02:30.772367 extend-filesystems[1639]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. 
Oct 30 00:02:30.779409 extend-filesystems[1604]: Resized filesystem in /dev/vda9 Oct 30 00:02:30.782977 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 30 00:02:30.783907 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 30 00:02:30.794446 tar[1631]: linux-amd64/LICENSE Oct 30 00:02:30.794933 tar[1631]: linux-amd64/helm Oct 30 00:02:30.821818 dbus-daemon[1601]: [system] SELinux support is enabled Oct 30 00:02:30.822719 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 30 00:02:30.827871 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 30 00:02:30.827925 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 30 00:02:30.830376 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 30 00:02:30.830402 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 30 00:02:30.841072 systemd[1]: Started update-engine.service - Update Engine. Oct 30 00:02:30.844699 update_engine[1616]: I20251030 00:02:30.844592 1616 update_check_scheduler.cc:74] Next update check in 6m53s Oct 30 00:02:30.845054 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 30 00:02:30.846144 systemd-logind[1613]: Watching system buttons on /dev/input/event2 (Power Button) Oct 30 00:02:30.846185 systemd-logind[1613]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 30 00:02:30.850468 bash[1670]: Updated "/home/core/.ssh/authorized_keys" Oct 30 00:02:30.864182 systemd-logind[1613]: New seat seat0. 
Oct 30 00:02:30.882716 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 30 00:02:30.885026 systemd[1]: Started systemd-logind.service - User Login Management. Oct 30 00:02:30.899010 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 30 00:02:30.983276 locksmithd[1671]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 30 00:02:31.022875 sshd_keygen[1626]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 30 00:02:31.055273 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 30 00:02:31.060007 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 30 00:02:31.235390 systemd[1]: issuegen.service: Deactivated successfully. Oct 30 00:02:31.235777 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 30 00:02:31.243907 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 30 00:02:31.275076 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 30 00:02:31.280902 systemd[1]: Started getty@tty1.service - Getty on tty1. Oct 30 00:02:31.287267 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Oct 30 00:02:31.288279 systemd[1]: Reached target getty.target - Login Prompts. Oct 30 00:02:31.624579 tar[1631]: linux-amd64/README.md Oct 30 00:02:31.643272 containerd[1633]: time="2025-10-30T00:02:31Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Oct 30 00:02:31.659245 containerd[1633]: time="2025-10-30T00:02:31.659139186Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Oct 30 00:02:31.668221 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Oct 30 00:02:31.674109 containerd[1633]: time="2025-10-30T00:02:31.674038092Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="21.598µs" Oct 30 00:02:31.674109 containerd[1633]: time="2025-10-30T00:02:31.674093521Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Oct 30 00:02:31.674188 containerd[1633]: time="2025-10-30T00:02:31.674120255Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Oct 30 00:02:31.674442 containerd[1633]: time="2025-10-30T00:02:31.674407945Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Oct 30 00:02:31.674442 containerd[1633]: time="2025-10-30T00:02:31.674433489Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Oct 30 00:02:31.674501 containerd[1633]: time="2025-10-30T00:02:31.674468640Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 30 00:02:31.674596 containerd[1633]: time="2025-10-30T00:02:31.674570790Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Oct 30 00:02:31.674596 containerd[1633]: time="2025-10-30T00:02:31.674590213Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 30 00:02:31.675015 containerd[1633]: time="2025-10-30T00:02:31.674973276Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Oct 30 00:02:31.675015 containerd[1633]: time="2025-10-30T00:02:31.674993524Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 30 00:02:31.675015 containerd[1633]: time="2025-10-30T00:02:31.675004821Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Oct 30 00:02:31.675015 containerd[1633]: time="2025-10-30T00:02:31.675012716Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Oct 30 00:02:31.675365 containerd[1633]: time="2025-10-30T00:02:31.675323726Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Oct 30 00:02:31.675778 containerd[1633]: time="2025-10-30T00:02:31.675734791Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 30 00:02:31.675815 containerd[1633]: time="2025-10-30T00:02:31.675785095Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Oct 30 00:02:31.675860 containerd[1633]: time="2025-10-30T00:02:31.675836477Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Oct 30 00:02:31.675917 containerd[1633]: time="2025-10-30T00:02:31.675891413Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Oct 30 00:02:31.676355 containerd[1633]: time="2025-10-30T00:02:31.676276883Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Oct 30 00:02:31.676504 containerd[1633]: time="2025-10-30T00:02:31.676467135Z" level=info msg="metadata content store policy set" policy=shared Oct 30 00:02:31.683759 containerd[1633]: time="2025-10-30T00:02:31.683676756Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler 
type=io.containerd.gc.v1 Oct 30 00:02:31.683915 containerd[1633]: time="2025-10-30T00:02:31.683776458Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Oct 30 00:02:31.683915 containerd[1633]: time="2025-10-30T00:02:31.683803927Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Oct 30 00:02:31.683915 containerd[1633]: time="2025-10-30T00:02:31.683827398Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Oct 30 00:02:31.683915 containerd[1633]: time="2025-10-30T00:02:31.683849337Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Oct 30 00:02:31.683915 containerd[1633]: time="2025-10-30T00:02:31.683865136Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Oct 30 00:02:31.683915 containerd[1633]: time="2025-10-30T00:02:31.683881961Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Oct 30 00:02:31.683915 containerd[1633]: time="2025-10-30T00:02:31.683899290Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Oct 30 00:02:31.683915 containerd[1633]: time="2025-10-30T00:02:31.683918562Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Oct 30 00:02:31.684204 containerd[1633]: time="2025-10-30T00:02:31.683938297Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Oct 30 00:02:31.684204 containerd[1633]: time="2025-10-30T00:02:31.683955002Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Oct 30 00:02:31.684204 containerd[1633]: time="2025-10-30T00:02:31.683977918Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task 
type=io.containerd.runtime.v2 Oct 30 00:02:31.684305 containerd[1633]: time="2025-10-30T00:02:31.684246850Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Oct 30 00:02:31.684305 containerd[1633]: time="2025-10-30T00:02:31.684288213Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Oct 30 00:02:31.684375 containerd[1633]: time="2025-10-30T00:02:31.684310275Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Oct 30 00:02:31.684375 containerd[1633]: time="2025-10-30T00:02:31.684330029Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Oct 30 00:02:31.684375 containerd[1633]: time="2025-10-30T00:02:31.684345505Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Oct 30 00:02:31.684375 containerd[1633]: time="2025-10-30T00:02:31.684361163Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Oct 30 00:02:31.684375 containerd[1633]: time="2025-10-30T00:02:31.684376820Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Oct 30 00:02:31.684523 containerd[1633]: time="2025-10-30T00:02:31.684395508Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Oct 30 00:02:31.684523 containerd[1633]: time="2025-10-30T00:02:31.684412877Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Oct 30 00:02:31.684523 containerd[1633]: time="2025-10-30T00:02:31.684442208Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Oct 30 00:02:31.684523 containerd[1633]: time="2025-10-30T00:02:31.684459426Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Oct 30 00:02:31.684671 containerd[1633]: 
time="2025-10-30T00:02:31.684581209Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Oct 30 00:02:31.684671 containerd[1633]: time="2025-10-30T00:02:31.684628091Z" level=info msg="Start snapshots syncer" Oct 30 00:02:31.684727 containerd[1633]: time="2025-10-30T00:02:31.684685695Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Oct 30 00:02:31.685093 containerd[1633]: time="2025-10-30T00:02:31.685035148Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\
":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Oct 30 00:02:31.685324 containerd[1633]: time="2025-10-30T00:02:31.685094746Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Oct 30 00:02:31.685324 containerd[1633]: time="2025-10-30T00:02:31.685195375Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Oct 30 00:02:31.685411 containerd[1633]: time="2025-10-30T00:02:31.685377683Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Oct 30 00:02:31.685457 containerd[1633]: time="2025-10-30T00:02:31.685415856Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Oct 30 00:02:31.685457 containerd[1633]: time="2025-10-30T00:02:31.685431513Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Oct 30 00:02:31.685457 containerd[1633]: time="2025-10-30T00:02:31.685446183Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Oct 30 00:02:31.685558 containerd[1633]: time="2025-10-30T00:02:31.685467236Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Oct 30 00:02:31.685558 containerd[1633]: time="2025-10-30T00:02:31.685482844Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Oct 30 00:02:31.685558 containerd[1633]: time="2025-10-30T00:02:31.685498219Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Oct 30 00:02:31.685558 containerd[1633]: time="2025-10-30T00:02:31.685541526Z" level=info 
msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Oct 30 00:02:31.685558 containerd[1633]: time="2025-10-30T00:02:31.685560667Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Oct 30 00:02:31.685737 containerd[1633]: time="2025-10-30T00:02:31.685579707Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Oct 30 00:02:31.685737 containerd[1633]: time="2025-10-30T00:02:31.685655284Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 30 00:02:31.685737 containerd[1633]: time="2025-10-30T00:02:31.685675433Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Oct 30 00:02:31.685737 containerd[1633]: time="2025-10-30T00:02:31.685686951Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 30 00:02:31.685737 containerd[1633]: time="2025-10-30T00:02:31.685698591Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Oct 30 00:02:31.685737 containerd[1633]: time="2025-10-30T00:02:31.685708993Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Oct 30 00:02:31.685737 containerd[1633]: time="2025-10-30T00:02:31.685721065Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Oct 30 00:02:31.685737 containerd[1633]: time="2025-10-30T00:02:31.685737226Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Oct 30 00:02:31.685955 containerd[1633]: time="2025-10-30T00:02:31.685763919Z" level=info msg="runtime interface created" Oct 30 00:02:31.685955 containerd[1633]: 
time="2025-10-30T00:02:31.685771672Z" level=info msg="created NRI interface" Oct 30 00:02:31.685955 containerd[1633]: time="2025-10-30T00:02:31.685784187Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Oct 30 00:02:31.685955 containerd[1633]: time="2025-10-30T00:02:31.685799251Z" level=info msg="Connect containerd service" Oct 30 00:02:31.685955 containerd[1633]: time="2025-10-30T00:02:31.685850512Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 30 00:02:31.687036 containerd[1633]: time="2025-10-30T00:02:31.686991616Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 30 00:02:31.856655 containerd[1633]: time="2025-10-30T00:02:31.856539896Z" level=info msg="Start subscribing containerd event" Oct 30 00:02:31.856923 containerd[1633]: time="2025-10-30T00:02:31.856690719Z" level=info msg="Start recovering state" Oct 30 00:02:31.858252 containerd[1633]: time="2025-10-30T00:02:31.858166415Z" level=info msg="Start event monitor" Oct 30 00:02:31.858252 containerd[1633]: time="2025-10-30T00:02:31.858207626Z" level=info msg="Start cni network conf syncer for default" Oct 30 00:02:31.858252 containerd[1633]: time="2025-10-30T00:02:31.858227925Z" level=info msg="Start streaming server" Oct 30 00:02:31.858252 containerd[1633]: time="2025-10-30T00:02:31.858249051Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Oct 30 00:02:31.858636 containerd[1633]: time="2025-10-30T00:02:31.858260308Z" level=info msg="runtime interface starting up..." Oct 30 00:02:31.858636 containerd[1633]: time="2025-10-30T00:02:31.858273941Z" level=info msg="starting plugins..." 
Oct 30 00:02:31.858636 containerd[1633]: time="2025-10-30T00:02:31.858294572Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Oct 30 00:02:31.858636 containerd[1633]: time="2025-10-30T00:02:31.858431913Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 30 00:02:31.858636 containerd[1633]: time="2025-10-30T00:02:31.858530800Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 30 00:02:31.858917 containerd[1633]: time="2025-10-30T00:02:31.858656128Z" level=info msg="containerd successfully booted in 0.216280s" Oct 30 00:02:31.858916 systemd[1]: Started containerd.service - containerd container runtime. Oct 30 00:02:31.995952 systemd-networkd[1516]: eth0: Gained IPv6LL Oct 30 00:02:32.000467 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 30 00:02:32.003516 systemd[1]: Reached target network-online.target - Network is Online. Oct 30 00:02:32.007159 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 30 00:02:32.010820 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 00:02:32.033390 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 30 00:02:32.066904 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 30 00:02:32.069957 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 30 00:02:32.070312 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 30 00:02:32.074060 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 30 00:02:32.435598 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 30 00:02:32.439076 systemd[1]: Started sshd@0-10.0.0.82:22-10.0.0.1:44782.service - OpenSSH per-connection server daemon (10.0.0.1:44782). 
Oct 30 00:02:32.530751 sshd[1738]: Accepted publickey for core from 10.0.0.1 port 44782 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 30 00:02:32.533004 sshd-session[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:02:32.540996 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 30 00:02:32.544550 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 30 00:02:32.556578 systemd-logind[1613]: New session 1 of user core. Oct 30 00:02:32.575739 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 30 00:02:32.582828 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 30 00:02:32.602587 (systemd)[1743]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 30 00:02:32.605542 systemd-logind[1613]: New session c1 of user core. Oct 30 00:02:32.786195 systemd[1743]: Queued start job for default target default.target. Oct 30 00:02:32.801128 systemd[1743]: Created slice app.slice - User Application Slice. Oct 30 00:02:32.801158 systemd[1743]: Reached target paths.target - Paths. Oct 30 00:02:32.801206 systemd[1743]: Reached target timers.target - Timers. Oct 30 00:02:32.802908 systemd[1743]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 30 00:02:32.819116 systemd[1743]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 30 00:02:32.819249 systemd[1743]: Reached target sockets.target - Sockets. Oct 30 00:02:32.819288 systemd[1743]: Reached target basic.target - Basic System. Oct 30 00:02:32.819332 systemd[1743]: Reached target default.target - Main User Target. Oct 30 00:02:32.819374 systemd[1743]: Startup finished in 191ms. Oct 30 00:02:32.820234 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 30 00:02:32.824103 systemd[1]: Started session-1.scope - Session 1 of User core. 
Oct 30 00:02:32.891229 systemd[1]: Started sshd@1-10.0.0.82:22-10.0.0.1:44796.service - OpenSSH per-connection server daemon (10.0.0.1:44796). Oct 30 00:02:33.012999 sshd[1754]: Accepted publickey for core from 10.0.0.1 port 44796 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 30 00:02:33.014762 sshd-session[1754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:02:33.019443 systemd-logind[1613]: New session 2 of user core. Oct 30 00:02:33.028757 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 30 00:02:33.086979 sshd[1757]: Connection closed by 10.0.0.1 port 44796 Oct 30 00:02:33.087348 sshd-session[1754]: pam_unix(sshd:session): session closed for user core Oct 30 00:02:33.105137 systemd[1]: sshd@1-10.0.0.82:22-10.0.0.1:44796.service: Deactivated successfully. Oct 30 00:02:33.107259 systemd[1]: session-2.scope: Deactivated successfully. Oct 30 00:02:33.108185 systemd-logind[1613]: Session 2 logged out. Waiting for processes to exit. Oct 30 00:02:33.121885 systemd[1]: Started sshd@2-10.0.0.82:22-10.0.0.1:44810.service - OpenSSH per-connection server daemon (10.0.0.1:44810). Oct 30 00:02:33.126252 systemd-logind[1613]: Removed session 2. Oct 30 00:02:33.136101 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 00:02:33.138766 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 30 00:02:33.141029 systemd[1]: Startup finished in 3.055s (kernel) + 7.256s (initrd) + 5.922s (userspace) = 16.234s. 
Oct 30 00:02:33.141842 (kubelet)[1769]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 30 00:02:33.179065 sshd[1765]: Accepted publickey for core from 10.0.0.1 port 44810 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 30 00:02:33.180741 sshd-session[1765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:02:33.185560 systemd-logind[1613]: New session 3 of user core. Oct 30 00:02:33.192761 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 30 00:02:33.247889 sshd[1774]: Connection closed by 10.0.0.1 port 44810 Oct 30 00:02:33.248219 sshd-session[1765]: pam_unix(sshd:session): session closed for user core Oct 30 00:02:33.253758 systemd[1]: sshd@2-10.0.0.82:22-10.0.0.1:44810.service: Deactivated successfully. Oct 30 00:02:33.255865 systemd[1]: session-3.scope: Deactivated successfully. Oct 30 00:02:33.256877 systemd-logind[1613]: Session 3 logged out. Waiting for processes to exit. Oct 30 00:02:33.258081 systemd-logind[1613]: Removed session 3. Oct 30 00:02:33.534237 kubelet[1769]: E1030 00:02:33.534153 1769 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 30 00:02:33.538148 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 30 00:02:33.538408 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 30 00:02:33.538938 systemd[1]: kubelet.service: Consumed 1.286s CPU time, 257.4M memory peak. Oct 30 00:02:43.301377 systemd[1]: Started sshd@3-10.0.0.82:22-10.0.0.1:49674.service - OpenSSH per-connection server daemon (10.0.0.1:49674). 
Oct 30 00:02:43.378184 sshd[1789]: Accepted publickey for core from 10.0.0.1 port 49674 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 30 00:02:43.380539 sshd-session[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:02:43.387643 systemd-logind[1613]: New session 4 of user core. Oct 30 00:02:43.394859 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 30 00:02:43.456069 sshd[1792]: Connection closed by 10.0.0.1 port 49674 Oct 30 00:02:43.456458 sshd-session[1789]: pam_unix(sshd:session): session closed for user core Oct 30 00:02:43.471824 systemd[1]: sshd@3-10.0.0.82:22-10.0.0.1:49674.service: Deactivated successfully. Oct 30 00:02:43.473905 systemd[1]: session-4.scope: Deactivated successfully. Oct 30 00:02:43.474883 systemd-logind[1613]: Session 4 logged out. Waiting for processes to exit. Oct 30 00:02:43.478298 systemd[1]: Started sshd@4-10.0.0.82:22-10.0.0.1:49688.service - OpenSSH per-connection server daemon (10.0.0.1:49688). Oct 30 00:02:43.479089 systemd-logind[1613]: Removed session 4. Oct 30 00:02:43.542307 sshd[1798]: Accepted publickey for core from 10.0.0.1 port 49688 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 30 00:02:43.544380 sshd-session[1798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:02:43.545918 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 30 00:02:43.548193 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 00:02:43.551411 systemd-logind[1613]: New session 5 of user core. Oct 30 00:02:43.564843 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 30 00:02:43.620678 sshd[1804]: Connection closed by 10.0.0.1 port 49688 Oct 30 00:02:43.621532 sshd-session[1798]: pam_unix(sshd:session): session closed for user core Oct 30 00:02:43.633796 systemd[1]: sshd@4-10.0.0.82:22-10.0.0.1:49688.service: Deactivated successfully. 
Oct 30 00:02:43.636487 systemd[1]: session-5.scope: Deactivated successfully. Oct 30 00:02:43.637552 systemd-logind[1613]: Session 5 logged out. Waiting for processes to exit. Oct 30 00:02:43.641421 systemd[1]: Started sshd@5-10.0.0.82:22-10.0.0.1:49692.service - OpenSSH per-connection server daemon (10.0.0.1:49692). Oct 30 00:02:43.643392 systemd-logind[1613]: Removed session 5. Oct 30 00:02:43.709903 sshd[1810]: Accepted publickey for core from 10.0.0.1 port 49692 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 30 00:02:43.711728 sshd-session[1810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:02:43.716737 systemd-logind[1613]: New session 6 of user core. Oct 30 00:02:43.725869 systemd[1]: Started session-6.scope - Session 6 of User core. Oct 30 00:02:43.781693 sshd[1813]: Connection closed by 10.0.0.1 port 49692 Oct 30 00:02:43.782055 sshd-session[1810]: pam_unix(sshd:session): session closed for user core Oct 30 00:02:43.801966 systemd[1]: sshd@5-10.0.0.82:22-10.0.0.1:49692.service: Deactivated successfully. Oct 30 00:02:43.804175 systemd[1]: session-6.scope: Deactivated successfully. Oct 30 00:02:43.806790 systemd-logind[1613]: Session 6 logged out. Waiting for processes to exit. Oct 30 00:02:43.808346 systemd-logind[1613]: Removed session 6. Oct 30 00:02:43.819950 systemd[1]: Started sshd@6-10.0.0.82:22-10.0.0.1:49698.service - OpenSSH per-connection server daemon (10.0.0.1:49698). Oct 30 00:02:43.841942 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 30 00:02:43.852943 (kubelet)[1827]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 30 00:02:43.884203 sshd[1823]: Accepted publickey for core from 10.0.0.1 port 49698 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 30 00:02:43.886091 sshd-session[1823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:02:43.892052 systemd-logind[1613]: New session 7 of user core. Oct 30 00:02:43.896857 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 30 00:02:43.907041 kubelet[1827]: E1030 00:02:43.906959 1827 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 30 00:02:43.915048 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 30 00:02:43.915280 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 30 00:02:43.915741 systemd[1]: kubelet.service: Consumed 329ms CPU time, 111.2M memory peak. Oct 30 00:02:43.965207 sudo[1839]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 30 00:02:43.965531 sudo[1839]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 30 00:02:43.991250 sudo[1839]: pam_unix(sudo:session): session closed for user root Oct 30 00:02:43.994016 sshd[1836]: Connection closed by 10.0.0.1 port 49698 Oct 30 00:02:43.994467 sshd-session[1823]: pam_unix(sshd:session): session closed for user core Oct 30 00:02:44.007805 systemd[1]: sshd@6-10.0.0.82:22-10.0.0.1:49698.service: Deactivated successfully. Oct 30 00:02:44.009980 systemd[1]: session-7.scope: Deactivated successfully. Oct 30 00:02:44.011152 systemd-logind[1613]: Session 7 logged out. 
Waiting for processes to exit. Oct 30 00:02:44.014128 systemd[1]: Started sshd@7-10.0.0.82:22-10.0.0.1:49710.service - OpenSSH per-connection server daemon (10.0.0.1:49710). Oct 30 00:02:44.015031 systemd-logind[1613]: Removed session 7. Oct 30 00:02:44.085782 sshd[1845]: Accepted publickey for core from 10.0.0.1 port 49710 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 30 00:02:44.087743 sshd-session[1845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:02:44.093520 systemd-logind[1613]: New session 8 of user core. Oct 30 00:02:44.102828 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 30 00:02:44.160410 sudo[1850]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 30 00:02:44.160849 sudo[1850]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 30 00:02:44.652902 sudo[1850]: pam_unix(sudo:session): session closed for user root Oct 30 00:02:44.662827 sudo[1849]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 30 00:02:44.663252 sudo[1849]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 30 00:02:44.679197 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 30 00:02:44.744843 augenrules[1872]: No rules Oct 30 00:02:44.746905 systemd[1]: audit-rules.service: Deactivated successfully. Oct 30 00:02:44.747229 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 30 00:02:44.748808 sudo[1849]: pam_unix(sudo:session): session closed for user root Oct 30 00:02:44.751504 sshd[1848]: Connection closed by 10.0.0.1 port 49710 Oct 30 00:02:44.751799 sshd-session[1845]: pam_unix(sshd:session): session closed for user core Oct 30 00:02:44.763691 systemd[1]: sshd@7-10.0.0.82:22-10.0.0.1:49710.service: Deactivated successfully. 
Oct 30 00:02:44.765693 systemd[1]: session-8.scope: Deactivated successfully. Oct 30 00:02:44.766627 systemd-logind[1613]: Session 8 logged out. Waiting for processes to exit. Oct 30 00:02:44.769537 systemd[1]: Started sshd@8-10.0.0.82:22-10.0.0.1:49712.service - OpenSSH per-connection server daemon (10.0.0.1:49712). Oct 30 00:02:44.770212 systemd-logind[1613]: Removed session 8. Oct 30 00:02:44.850829 sshd[1881]: Accepted publickey for core from 10.0.0.1 port 49712 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 30 00:02:44.852759 sshd-session[1881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:02:44.858755 systemd-logind[1613]: New session 9 of user core. Oct 30 00:02:44.869139 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 30 00:02:44.927696 sudo[1885]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 30 00:02:44.928146 sudo[1885]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 30 00:02:45.318432 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 30 00:02:45.330957 (dockerd)[1906]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 30 00:02:45.618312 dockerd[1906]: time="2025-10-30T00:02:45.618131224Z" level=info msg="Starting up" Oct 30 00:02:45.619173 dockerd[1906]: time="2025-10-30T00:02:45.619147322Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Oct 30 00:02:45.633796 dockerd[1906]: time="2025-10-30T00:02:45.633736089Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Oct 30 00:02:45.695375 dockerd[1906]: time="2025-10-30T00:02:45.695306026Z" level=info msg="Loading containers: start." 
Oct 30 00:02:45.708650 kernel: Initializing XFRM netlink socket Oct 30 00:02:45.999767 systemd-networkd[1516]: docker0: Link UP Oct 30 00:02:46.004979 dockerd[1906]: time="2025-10-30T00:02:46.004942091Z" level=info msg="Loading containers: done." Oct 30 00:02:46.020448 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2412147028-merged.mount: Deactivated successfully. Oct 30 00:02:46.020962 dockerd[1906]: time="2025-10-30T00:02:46.020922656Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 30 00:02:46.021031 dockerd[1906]: time="2025-10-30T00:02:46.021000238Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Oct 30 00:02:46.021122 dockerd[1906]: time="2025-10-30T00:02:46.021103607Z" level=info msg="Initializing buildkit" Oct 30 00:02:46.051636 dockerd[1906]: time="2025-10-30T00:02:46.051582568Z" level=info msg="Completed buildkit initialization" Oct 30 00:02:46.055692 dockerd[1906]: time="2025-10-30T00:02:46.055641027Z" level=info msg="Daemon has completed initialization" Oct 30 00:02:46.055828 dockerd[1906]: time="2025-10-30T00:02:46.055740713Z" level=info msg="API listen on /run/docker.sock" Oct 30 00:02:46.055975 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 30 00:02:46.652734 containerd[1633]: time="2025-10-30T00:02:46.652646065Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Oct 30 00:02:47.187198 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3903433178.mount: Deactivated successfully. 
Oct 30 00:02:48.148253 containerd[1633]: time="2025-10-30T00:02:48.148170034Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:02:48.149078 containerd[1633]: time="2025-10-30T00:02:48.149030471Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=27065392" Oct 30 00:02:48.150547 containerd[1633]: time="2025-10-30T00:02:48.150505976Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:02:48.153657 containerd[1633]: time="2025-10-30T00:02:48.153578535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:02:48.154836 containerd[1633]: time="2025-10-30T00:02:48.154770927Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 1.502030822s" Oct 30 00:02:48.154836 containerd[1633]: time="2025-10-30T00:02:48.154820011Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\"" Oct 30 00:02:48.155561 containerd[1633]: time="2025-10-30T00:02:48.155504796Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Oct 30 00:02:50.884711 containerd[1633]: time="2025-10-30T00:02:50.884546473Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:02:50.892496 containerd[1633]: time="2025-10-30T00:02:50.891226179Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=21159757" Oct 30 00:02:50.895184 containerd[1633]: time="2025-10-30T00:02:50.893333216Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:02:50.904068 containerd[1633]: time="2025-10-30T00:02:50.902236567Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:02:50.908173 containerd[1633]: time="2025-10-30T00:02:50.908039948Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 2.75246856s" Oct 30 00:02:50.908173 containerd[1633]: time="2025-10-30T00:02:50.908171175Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\"" Oct 30 00:02:50.909029 containerd[1633]: time="2025-10-30T00:02:50.908781112Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Oct 30 00:02:52.753156 containerd[1633]: time="2025-10-30T00:02:52.753069405Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:02:52.754161 containerd[1633]: time="2025-10-30T00:02:52.754112243Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=15725093" Oct 30 00:02:52.755692 containerd[1633]: time="2025-10-30T00:02:52.755662084Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:02:52.759465 containerd[1633]: time="2025-10-30T00:02:52.759432327Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:02:52.761352 containerd[1633]: time="2025-10-30T00:02:52.761276549Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 1.852458295s" Oct 30 00:02:52.761433 containerd[1633]: time="2025-10-30T00:02:52.761357204Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\"" Oct 30 00:02:52.762059 containerd[1633]: time="2025-10-30T00:02:52.762001211Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Oct 30 00:02:53.994008 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Oct 30 00:02:53.997828 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 00:02:54.311075 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 30 00:02:54.321021 (kubelet)[2204]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 30 00:02:54.374595 kubelet[2204]: E1030 00:02:54.374479 2204 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 30 00:02:54.379919 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 30 00:02:54.380230 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 30 00:02:54.380820 systemd[1]: kubelet.service: Consumed 331ms CPU time, 110.5M memory peak. Oct 30 00:02:54.392766 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1266169863.mount: Deactivated successfully. Oct 30 00:02:55.512085 containerd[1633]: time="2025-10-30T00:02:55.511917369Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:02:55.513916 containerd[1633]: time="2025-10-30T00:02:55.513781853Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=25964699" Oct 30 00:02:55.516011 containerd[1633]: time="2025-10-30T00:02:55.515744275Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:02:55.518899 containerd[1633]: time="2025-10-30T00:02:55.518835321Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:02:55.519471 containerd[1633]: time="2025-10-30T00:02:55.519421020Z" level=info 
msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 2.757381802s" Oct 30 00:02:55.519544 containerd[1633]: time="2025-10-30T00:02:55.519471247Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\"" Oct 30 00:02:55.520247 containerd[1633]: time="2025-10-30T00:02:55.520203939Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Oct 30 00:02:57.021961 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2898321224.mount: Deactivated successfully. Oct 30 00:02:59.434889 containerd[1633]: time="2025-10-30T00:02:59.434776501Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:02:59.436806 containerd[1633]: time="2025-10-30T00:02:59.436763686Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Oct 30 00:02:59.440855 containerd[1633]: time="2025-10-30T00:02:59.440802441Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:02:59.444893 containerd[1633]: time="2025-10-30T00:02:59.444779088Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:02:59.446076 containerd[1633]: time="2025-10-30T00:02:59.446015389Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id 
\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 3.925768046s" Oct 30 00:02:59.446076 containerd[1633]: time="2025-10-30T00:02:59.446065655Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Oct 30 00:02:59.449551 containerd[1633]: time="2025-10-30T00:02:59.449513878Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Oct 30 00:03:01.362883 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1750301968.mount: Deactivated successfully. Oct 30 00:03:01.373136 containerd[1633]: time="2025-10-30T00:03:01.373070861Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:03:01.375781 containerd[1633]: time="2025-10-30T00:03:01.375658317Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Oct 30 00:03:01.380665 containerd[1633]: time="2025-10-30T00:03:01.380588815Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:03:01.383562 containerd[1633]: time="2025-10-30T00:03:01.383469844Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:03:01.384462 containerd[1633]: time="2025-10-30T00:03:01.384412120Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag 
\"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 1.934860631s" Oct 30 00:03:01.384462 containerd[1633]: time="2025-10-30T00:03:01.384443822Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Oct 30 00:03:01.385845 containerd[1633]: time="2025-10-30T00:03:01.385798619Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Oct 30 00:03:04.494196 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Oct 30 00:03:04.496780 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 00:03:05.120504 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 00:03:05.144207 (kubelet)[2323]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 30 00:03:05.284708 kubelet[2323]: E1030 00:03:05.284635 2323 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 30 00:03:05.288780 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 30 00:03:05.288982 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 30 00:03:05.289414 systemd[1]: kubelet.service: Consumed 399ms CPU time, 110.4M memory peak. 
Oct 30 00:03:05.645974 containerd[1633]: time="2025-10-30T00:03:05.645865187Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:03:05.647813 containerd[1633]: time="2025-10-30T00:03:05.647729671Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=73514593" Oct 30 00:03:05.649756 containerd[1633]: time="2025-10-30T00:03:05.649679102Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:03:05.654654 containerd[1633]: time="2025-10-30T00:03:05.654568222Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:03:05.657629 containerd[1633]: time="2025-10-30T00:03:05.656995395Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 4.271156443s" Oct 30 00:03:05.657629 containerd[1633]: time="2025-10-30T00:03:05.657062157Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Oct 30 00:03:08.595069 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 00:03:08.595246 systemd[1]: kubelet.service: Consumed 399ms CPU time, 110.4M memory peak. Oct 30 00:03:08.597813 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 00:03:08.647686 systemd[1]: Reload requested from client PID 2363 ('systemctl') (unit session-9.scope)... 
Oct 30 00:03:08.647710 systemd[1]: Reloading... Oct 30 00:03:08.789936 zram_generator::config[2407]: No configuration found. Oct 30 00:03:09.548444 systemd[1]: Reloading finished in 900 ms. Oct 30 00:03:09.639835 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Oct 30 00:03:09.639945 systemd[1]: kubelet.service: Failed with result 'signal'. Oct 30 00:03:09.640341 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 00:03:09.640397 systemd[1]: kubelet.service: Consumed 198ms CPU time, 98.3M memory peak. Oct 30 00:03:09.642452 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 00:03:09.911871 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 00:03:09.943103 (kubelet)[2455]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 30 00:03:10.037246 kubelet[2455]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 30 00:03:10.037246 kubelet[2455]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 30 00:03:10.037673 kubelet[2455]: I1030 00:03:10.037280 2455 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 30 00:03:11.735274 kubelet[2455]: I1030 00:03:11.735174 2455 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 30 00:03:11.735274 kubelet[2455]: I1030 00:03:11.735226 2455 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 30 00:03:11.735274 kubelet[2455]: I1030 00:03:11.735275 2455 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 30 00:03:11.735274 kubelet[2455]: I1030 00:03:11.735282 2455 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Oct 30 00:03:11.735888 kubelet[2455]: I1030 00:03:11.735583 2455 server.go:956] "Client rotation is on, will bootstrap in background" Oct 30 00:03:11.743108 kubelet[2455]: E1030 00:03:11.743029 2455 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.82:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Oct 30 00:03:11.743311 kubelet[2455]: I1030 00:03:11.743167 2455 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 30 00:03:11.748711 kubelet[2455]: I1030 00:03:11.748222 2455 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 30 00:03:11.755729 kubelet[2455]: I1030 00:03:11.755677 2455 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Oct 30 00:03:11.756085 kubelet[2455]: I1030 00:03:11.756033 2455 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 30 00:03:11.756281 kubelet[2455]: I1030 00:03:11.756065 2455 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 30 00:03:11.756281 kubelet[2455]: I1030 00:03:11.756278 2455 topology_manager.go:138] "Creating topology manager with none policy" Oct 30 00:03:11.756480 
kubelet[2455]: I1030 00:03:11.756296 2455 container_manager_linux.go:306] "Creating device plugin manager" Oct 30 00:03:11.756480 kubelet[2455]: I1030 00:03:11.756438 2455 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Oct 30 00:03:11.763634 kubelet[2455]: I1030 00:03:11.763533 2455 state_mem.go:36] "Initialized new in-memory state store" Oct 30 00:03:11.763880 kubelet[2455]: I1030 00:03:11.763850 2455 kubelet.go:475] "Attempting to sync node with API server" Oct 30 00:03:11.763880 kubelet[2455]: I1030 00:03:11.763872 2455 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 30 00:03:11.764020 kubelet[2455]: I1030 00:03:11.763929 2455 kubelet.go:387] "Adding apiserver pod source" Oct 30 00:03:11.764020 kubelet[2455]: I1030 00:03:11.763983 2455 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 30 00:03:11.770628 kubelet[2455]: E1030 00:03:11.768651 2455 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.82:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 30 00:03:11.770628 kubelet[2455]: E1030 00:03:11.769229 2455 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.82:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 30 00:03:11.771264 kubelet[2455]: I1030 00:03:11.771209 2455 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 30 00:03:11.772139 kubelet[2455]: I1030 00:03:11.772096 2455 kubelet.go:940] "Not starting ClusterTrustBundle informer because 
we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 30 00:03:11.772139 kubelet[2455]: I1030 00:03:11.772132 2455 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Oct 30 00:03:11.772242 kubelet[2455]: W1030 00:03:11.772233 2455 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 30 00:03:11.778332 kubelet[2455]: I1030 00:03:11.778050 2455 server.go:1262] "Started kubelet" Oct 30 00:03:11.778332 kubelet[2455]: I1030 00:03:11.778280 2455 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 30 00:03:11.778418 kubelet[2455]: I1030 00:03:11.778347 2455 server_v1.go:49] "podresources" method="list" useActivePods=true Oct 30 00:03:11.778646 kubelet[2455]: I1030 00:03:11.778585 2455 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 30 00:03:11.779180 kubelet[2455]: I1030 00:03:11.779147 2455 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 30 00:03:11.779263 kubelet[2455]: I1030 00:03:11.779201 2455 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 30 00:03:11.782757 kubelet[2455]: I1030 00:03:11.782270 2455 server.go:310] "Adding debug handlers to kubelet server" Oct 30 00:03:11.783113 kubelet[2455]: I1030 00:03:11.783076 2455 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 30 00:03:11.784782 kubelet[2455]: E1030 00:03:11.784180 2455 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 30 00:03:11.784782 kubelet[2455]: I1030 00:03:11.784224 2455 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 30 00:03:11.784782 
kubelet[2455]: I1030 00:03:11.784370 2455 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 30 00:03:11.784782 kubelet[2455]: I1030 00:03:11.784458 2455 reconciler.go:29] "Reconciler: start to sync state" Oct 30 00:03:11.784782 kubelet[2455]: E1030 00:03:11.784774 2455 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.82:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 30 00:03:11.785493 kubelet[2455]: E1030 00:03:11.784988 2455 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.82:6443: connect: connection refused" interval="200ms" Oct 30 00:03:11.785821 kubelet[2455]: E1030 00:03:11.783478 2455 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.82:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.82:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18731beed26501f9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-30 00:03:11.777997305 +0000 UTC m=+1.802120602,LastTimestamp:2025-10-30 00:03:11.777997305 +0000 UTC m=+1.802120602,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 30 00:03:11.787487 kubelet[2455]: I1030 00:03:11.787029 2455 factory.go:223] Registration of the containerd container factory successfully Oct 30 00:03:11.787487 
kubelet[2455]: I1030 00:03:11.787046 2455 factory.go:223] Registration of the systemd container factory successfully Oct 30 00:03:11.787487 kubelet[2455]: I1030 00:03:11.787113 2455 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 30 00:03:11.787825 kubelet[2455]: E1030 00:03:11.787769 2455 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 30 00:03:11.810001 kubelet[2455]: I1030 00:03:11.809848 2455 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Oct 30 00:03:11.812296 kubelet[2455]: I1030 00:03:11.812252 2455 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 30 00:03:11.812296 kubelet[2455]: I1030 00:03:11.812280 2455 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 30 00:03:11.812459 kubelet[2455]: I1030 00:03:11.812307 2455 state_mem.go:36] "Initialized new in-memory state store" Oct 30 00:03:11.816455 kubelet[2455]: I1030 00:03:11.816422 2455 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Oct 30 00:03:11.817142 kubelet[2455]: I1030 00:03:11.817124 2455 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 30 00:03:11.817336 kubelet[2455]: I1030 00:03:11.817321 2455 kubelet.go:2427] "Starting kubelet main sync loop" Oct 30 00:03:11.817968 kubelet[2455]: E1030 00:03:11.817438 2455 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 30 00:03:11.825229 kubelet[2455]: I1030 00:03:11.825180 2455 policy_none.go:49] "None policy: Start" Oct 30 00:03:11.825360 kubelet[2455]: I1030 00:03:11.825254 2455 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 30 00:03:11.825360 kubelet[2455]: I1030 00:03:11.825281 2455 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Oct 30 00:03:11.826386 kubelet[2455]: E1030 00:03:11.826278 2455 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.82:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 30 00:03:11.831016 kubelet[2455]: I1030 00:03:11.829409 2455 policy_none.go:47] "Start" Oct 30 00:03:11.836340 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 30 00:03:11.854724 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 30 00:03:11.859434 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Oct 30 00:03:11.870581 kubelet[2455]: E1030 00:03:11.870518 2455 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 30 00:03:11.870581 kubelet[2455]: I1030 00:03:11.870926 2455 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 30 00:03:11.870581 kubelet[2455]: I1030 00:03:11.870968 2455 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 30 00:03:11.870581 kubelet[2455]: I1030 00:03:11.871298 2455 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 30 00:03:11.875034 kubelet[2455]: E1030 00:03:11.873981 2455 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 30 00:03:11.875034 kubelet[2455]: E1030 00:03:11.874027 2455 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 30 00:03:11.938916 systemd[1]: Created slice kubepods-burstable-pod52aa42f49d00707cfc4893f689a00cc8.slice - libcontainer container kubepods-burstable-pod52aa42f49d00707cfc4893f689a00cc8.slice. Oct 30 00:03:11.961876 kubelet[2455]: E1030 00:03:11.961794 2455 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 00:03:11.966118 systemd[1]: Created slice kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice - libcontainer container kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice. 
Oct 30 00:03:11.972946 kubelet[2455]: I1030 00:03:11.972905 2455 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 30 00:03:11.973439 kubelet[2455]: E1030 00:03:11.973396 2455 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.82:6443/api/v1/nodes\": dial tcp 10.0.0.82:6443: connect: connection refused" node="localhost" Oct 30 00:03:11.975614 kubelet[2455]: E1030 00:03:11.975552 2455 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 00:03:11.980651 systemd[1]: Created slice kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice - libcontainer container kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice. Oct 30 00:03:11.983171 kubelet[2455]: E1030 00:03:11.983134 2455 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 00:03:11.985646 kubelet[2455]: E1030 00:03:11.985516 2455 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.82:6443: connect: connection refused" interval="400ms" Oct 30 00:03:12.085843 kubelet[2455]: I1030 00:03:12.085762 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 00:03:12.085843 kubelet[2455]: I1030 00:03:12.085824 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 00:03:12.085843 kubelet[2455]: I1030 00:03:12.085853 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Oct 30 00:03:12.086113 kubelet[2455]: I1030 00:03:12.085881 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/52aa42f49d00707cfc4893f689a00cc8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"52aa42f49d00707cfc4893f689a00cc8\") " pod="kube-system/kube-apiserver-localhost" Oct 30 00:03:12.086113 kubelet[2455]: I1030 00:03:12.085906 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 00:03:12.086113 kubelet[2455]: I1030 00:03:12.085927 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 00:03:12.086113 kubelet[2455]: I1030 00:03:12.085994 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 00:03:12.086113 kubelet[2455]: I1030 00:03:12.086071 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/52aa42f49d00707cfc4893f689a00cc8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"52aa42f49d00707cfc4893f689a00cc8\") " pod="kube-system/kube-apiserver-localhost" Oct 30 00:03:12.086225 kubelet[2455]: I1030 00:03:12.086120 2455 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/52aa42f49d00707cfc4893f689a00cc8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"52aa42f49d00707cfc4893f689a00cc8\") " pod="kube-system/kube-apiserver-localhost" Oct 30 00:03:12.175658 kubelet[2455]: I1030 00:03:12.175589 2455 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 30 00:03:12.176195 kubelet[2455]: E1030 00:03:12.176132 2455 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.82:6443/api/v1/nodes\": dial tcp 10.0.0.82:6443: connect: connection refused" node="localhost" Oct 30 00:03:12.329916 kubelet[2455]: E1030 00:03:12.329842 2455 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:03:12.330886 containerd[1633]: time="2025-10-30T00:03:12.330848018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:52aa42f49d00707cfc4893f689a00cc8,Namespace:kube-system,Attempt:0,}" Oct 30 00:03:12.359174 kubelet[2455]: E1030 00:03:12.358641 2455 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:03:12.359830 containerd[1633]: time="2025-10-30T00:03:12.359413651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,}" Oct 30 00:03:12.361504 kubelet[2455]: E1030 00:03:12.361226 2455 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:03:12.361701 containerd[1633]: time="2025-10-30T00:03:12.361664680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,}" Oct 30 00:03:12.386752 kubelet[2455]: E1030 00:03:12.386666 2455 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.82:6443: connect: connection refused" interval="800ms" Oct 30 00:03:12.577768 kubelet[2455]: I1030 00:03:12.577719 2455 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 30 00:03:12.578119 kubelet[2455]: E1030 00:03:12.578081 2455 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.82:6443/api/v1/nodes\": dial tcp 10.0.0.82:6443: connect: connection refused" node="localhost" Oct 30 00:03:12.619429 kubelet[2455]: E1030 00:03:12.619251 2455 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.82:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Oct 30 00:03:12.729643 kubelet[2455]: E1030 00:03:12.729546 
2455 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.82:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Oct 30 00:03:12.839574 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2480577630.mount: Deactivated successfully. Oct 30 00:03:12.849186 containerd[1633]: time="2025-10-30T00:03:12.849122178Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 30 00:03:12.852384 containerd[1633]: time="2025-10-30T00:03:12.852293479Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Oct 30 00:03:12.853432 containerd[1633]: time="2025-10-30T00:03:12.853384341Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 30 00:03:12.855407 containerd[1633]: time="2025-10-30T00:03:12.855354164Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 30 00:03:12.856479 containerd[1633]: time="2025-10-30T00:03:12.856433580Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Oct 30 00:03:12.857412 containerd[1633]: time="2025-10-30T00:03:12.857363734Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 30 00:03:12.858339 containerd[1633]: 
time="2025-10-30T00:03:12.858312740Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Oct 30 00:03:12.859356 containerd[1633]: time="2025-10-30T00:03:12.859286101Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 30 00:03:12.859969 containerd[1633]: time="2025-10-30T00:03:12.859931801Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 492.343643ms" Oct 30 00:03:12.863078 containerd[1633]: time="2025-10-30T00:03:12.863041063Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 497.808538ms" Oct 30 00:03:12.863742 containerd[1633]: time="2025-10-30T00:03:12.863701025Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 503.975197ms" Oct 30 00:03:12.908257 containerd[1633]: time="2025-10-30T00:03:12.908036950Z" level=info msg="connecting to shim 5b468c61e23830fdc149130b2f47c6338825743b4944fbd6768cbe7d0f7ccae7" address="unix:///run/containerd/s/299ac7fb444b64b7b5e664ec83984df736ff854802aaaa93ddeca7e4a556c09d" 
namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:03:12.925252 kubelet[2455]: E1030 00:03:12.925146 2455 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.82:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Oct 30 00:03:12.928679 kubelet[2455]: E1030 00:03:12.928626 2455 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.82:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Oct 30 00:03:12.929922 containerd[1633]: time="2025-10-30T00:03:12.929871574Z" level=info msg="connecting to shim 2bb25823dc9eb2846a12437574dc7ef60187cfda3d34007e5a3a11e68eefd2d4" address="unix:///run/containerd/s/6b374cbe18e9582b95ca9998bc4d7ac312667478bec5045dd60660bc35fee48e" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:03:12.932070 containerd[1633]: time="2025-10-30T00:03:12.932015284Z" level=info msg="connecting to shim 614e5e3e486dc8ec5623afe82b9188df4e08fb88402ee280000bfc3a18f3b88c" address="unix:///run/containerd/s/881304c0eb3eff70ce3b2ba1c5c241f49447f5e1303ef16c165384cbed1592b2" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:03:12.974769 systemd[1]: Started cri-containerd-5b468c61e23830fdc149130b2f47c6338825743b4944fbd6768cbe7d0f7ccae7.scope - libcontainer container 5b468c61e23830fdc149130b2f47c6338825743b4944fbd6768cbe7d0f7ccae7. Oct 30 00:03:12.978455 systemd[1]: Started cri-containerd-2bb25823dc9eb2846a12437574dc7ef60187cfda3d34007e5a3a11e68eefd2d4.scope - libcontainer container 2bb25823dc9eb2846a12437574dc7ef60187cfda3d34007e5a3a11e68eefd2d4. 
Oct 30 00:03:13.002941 systemd[1]: Started cri-containerd-614e5e3e486dc8ec5623afe82b9188df4e08fb88402ee280000bfc3a18f3b88c.scope - libcontainer container 614e5e3e486dc8ec5623afe82b9188df4e08fb88402ee280000bfc3a18f3b88c. Oct 30 00:03:13.171491 containerd[1633]: time="2025-10-30T00:03:13.171215542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,} returns sandbox id \"5b468c61e23830fdc149130b2f47c6338825743b4944fbd6768cbe7d0f7ccae7\"" Oct 30 00:03:13.173042 kubelet[2455]: E1030 00:03:13.173001 2455 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:03:13.188281 kubelet[2455]: E1030 00:03:13.188206 2455 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.82:6443: connect: connection refused" interval="1.6s" Oct 30 00:03:13.270509 containerd[1633]: time="2025-10-30T00:03:13.270072914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"2bb25823dc9eb2846a12437574dc7ef60187cfda3d34007e5a3a11e68eefd2d4\"" Oct 30 00:03:13.273993 kubelet[2455]: E1030 00:03:13.273931 2455 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:03:13.362053 containerd[1633]: time="2025-10-30T00:03:13.361976556Z" level=info msg="CreateContainer within sandbox \"5b468c61e23830fdc149130b2f47c6338825743b4944fbd6768cbe7d0f7ccae7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 30 00:03:13.380838 kubelet[2455]: I1030 00:03:13.380776 2455 
kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 30 00:03:13.381316 kubelet[2455]: E1030 00:03:13.381270 2455 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.82:6443/api/v1/nodes\": dial tcp 10.0.0.82:6443: connect: connection refused" node="localhost" Oct 30 00:03:13.395896 containerd[1633]: time="2025-10-30T00:03:13.395832836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:52aa42f49d00707cfc4893f689a00cc8,Namespace:kube-system,Attempt:0,} returns sandbox id \"614e5e3e486dc8ec5623afe82b9188df4e08fb88402ee280000bfc3a18f3b88c\"" Oct 30 00:03:13.397067 kubelet[2455]: E1030 00:03:13.397011 2455 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:03:13.398533 containerd[1633]: time="2025-10-30T00:03:13.398483437Z" level=info msg="CreateContainer within sandbox \"2bb25823dc9eb2846a12437574dc7ef60187cfda3d34007e5a3a11e68eefd2d4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 30 00:03:13.404202 containerd[1633]: time="2025-10-30T00:03:13.404117494Z" level=info msg="CreateContainer within sandbox \"614e5e3e486dc8ec5623afe82b9188df4e08fb88402ee280000bfc3a18f3b88c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 30 00:03:13.413593 containerd[1633]: time="2025-10-30T00:03:13.413513106Z" level=info msg="Container c512cb77dcac56fa65e917df32a088a10dc6f47e23ba879bead3c599ef19b155: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:03:13.415697 containerd[1633]: time="2025-10-30T00:03:13.415632374Z" level=info msg="Container df1c1baf27802844f37884e371777dab055104b8bc2e5e6ae49f1a2bdc6969d4: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:03:13.418526 containerd[1633]: time="2025-10-30T00:03:13.418478858Z" level=info msg="Container 
020b5133e77330b6a98300a2cb865ef7c78719f93730f367358994aa2e1cacb0: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:03:13.428889 containerd[1633]: time="2025-10-30T00:03:13.428718423Z" level=info msg="CreateContainer within sandbox \"2bb25823dc9eb2846a12437574dc7ef60187cfda3d34007e5a3a11e68eefd2d4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"df1c1baf27802844f37884e371777dab055104b8bc2e5e6ae49f1a2bdc6969d4\"" Oct 30 00:03:13.429490 containerd[1633]: time="2025-10-30T00:03:13.429431857Z" level=info msg="StartContainer for \"df1c1baf27802844f37884e371777dab055104b8bc2e5e6ae49f1a2bdc6969d4\"" Oct 30 00:03:13.431540 containerd[1633]: time="2025-10-30T00:03:13.431478235Z" level=info msg="connecting to shim df1c1baf27802844f37884e371777dab055104b8bc2e5e6ae49f1a2bdc6969d4" address="unix:///run/containerd/s/6b374cbe18e9582b95ca9998bc4d7ac312667478bec5045dd60660bc35fee48e" protocol=ttrpc version=3 Oct 30 00:03:13.432545 containerd[1633]: time="2025-10-30T00:03:13.432495100Z" level=info msg="CreateContainer within sandbox \"5b468c61e23830fdc149130b2f47c6338825743b4944fbd6768cbe7d0f7ccae7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c512cb77dcac56fa65e917df32a088a10dc6f47e23ba879bead3c599ef19b155\"" Oct 30 00:03:13.432961 containerd[1633]: time="2025-10-30T00:03:13.432933167Z" level=info msg="StartContainer for \"c512cb77dcac56fa65e917df32a088a10dc6f47e23ba879bead3c599ef19b155\"" Oct 30 00:03:13.434142 containerd[1633]: time="2025-10-30T00:03:13.434092716Z" level=info msg="connecting to shim c512cb77dcac56fa65e917df32a088a10dc6f47e23ba879bead3c599ef19b155" address="unix:///run/containerd/s/299ac7fb444b64b7b5e664ec83984df736ff854802aaaa93ddeca7e4a556c09d" protocol=ttrpc version=3 Oct 30 00:03:13.439133 containerd[1633]: time="2025-10-30T00:03:13.439068662Z" level=info msg="CreateContainer within sandbox \"614e5e3e486dc8ec5623afe82b9188df4e08fb88402ee280000bfc3a18f3b88c\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"020b5133e77330b6a98300a2cb865ef7c78719f93730f367358994aa2e1cacb0\"" Oct 30 00:03:13.440129 containerd[1633]: time="2025-10-30T00:03:13.440092882Z" level=info msg="StartContainer for \"020b5133e77330b6a98300a2cb865ef7c78719f93730f367358994aa2e1cacb0\"" Oct 30 00:03:13.441682 containerd[1633]: time="2025-10-30T00:03:13.441648055Z" level=info msg="connecting to shim 020b5133e77330b6a98300a2cb865ef7c78719f93730f367358994aa2e1cacb0" address="unix:///run/containerd/s/881304c0eb3eff70ce3b2ba1c5c241f49447f5e1303ef16c165384cbed1592b2" protocol=ttrpc version=3 Oct 30 00:03:13.466890 systemd[1]: Started cri-containerd-c512cb77dcac56fa65e917df32a088a10dc6f47e23ba879bead3c599ef19b155.scope - libcontainer container c512cb77dcac56fa65e917df32a088a10dc6f47e23ba879bead3c599ef19b155. Oct 30 00:03:13.468758 systemd[1]: Started cri-containerd-df1c1baf27802844f37884e371777dab055104b8bc2e5e6ae49f1a2bdc6969d4.scope - libcontainer container df1c1baf27802844f37884e371777dab055104b8bc2e5e6ae49f1a2bdc6969d4. Oct 30 00:03:13.475044 systemd[1]: Started cri-containerd-020b5133e77330b6a98300a2cb865ef7c78719f93730f367358994aa2e1cacb0.scope - libcontainer container 020b5133e77330b6a98300a2cb865ef7c78719f93730f367358994aa2e1cacb0. 
Oct 30 00:03:13.575188 containerd[1633]: time="2025-10-30T00:03:13.575117726Z" level=info msg="StartContainer for \"df1c1baf27802844f37884e371777dab055104b8bc2e5e6ae49f1a2bdc6969d4\" returns successfully" Oct 30 00:03:13.579324 containerd[1633]: time="2025-10-30T00:03:13.579092942Z" level=info msg="StartContainer for \"020b5133e77330b6a98300a2cb865ef7c78719f93730f367358994aa2e1cacb0\" returns successfully" Oct 30 00:03:13.583388 containerd[1633]: time="2025-10-30T00:03:13.583351964Z" level=info msg="StartContainer for \"c512cb77dcac56fa65e917df32a088a10dc6f47e23ba879bead3c599ef19b155\" returns successfully" Oct 30 00:03:13.843028 kubelet[2455]: E1030 00:03:13.842967 2455 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 00:03:13.843212 kubelet[2455]: E1030 00:03:13.843184 2455 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:03:13.846241 kubelet[2455]: E1030 00:03:13.846202 2455 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 00:03:13.846397 kubelet[2455]: E1030 00:03:13.846369 2455 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:03:13.850623 kubelet[2455]: E1030 00:03:13.849859 2455 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 00:03:13.850623 kubelet[2455]: E1030 00:03:13.850029 2455 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:03:14.885644 
kubelet[2455]: E1030 00:03:14.884705 2455 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 00:03:14.885644 kubelet[2455]: E1030 00:03:14.884866 2455 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:03:14.892508 kubelet[2455]: E1030 00:03:14.891850 2455 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 30 00:03:14.892508 kubelet[2455]: E1030 00:03:14.892139 2455 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:03:14.984366 kubelet[2455]: I1030 00:03:14.984313 2455 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 30 00:03:15.916142 update_engine[1616]: I20251030 00:03:15.909039 1616 update_attempter.cc:509] Updating boot flags... 
Oct 30 00:03:16.274340 kubelet[2455]: E1030 00:03:16.274297 2455 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 30 00:03:16.501750 kubelet[2455]: I1030 00:03:16.501653 2455 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 30 00:03:16.501750 kubelet[2455]: E1030 00:03:16.501738 2455 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Oct 30 00:03:16.588701 kubelet[2455]: I1030 00:03:16.587174 2455 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 30 00:03:16.808724 kubelet[2455]: I1030 00:03:16.808662 2455 apiserver.go:52] "Watching apiserver" Oct 30 00:03:16.823910 kubelet[2455]: E1030 00:03:16.823761 2455 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 30 00:03:16.823910 kubelet[2455]: I1030 00:03:16.823845 2455 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 30 00:03:16.826114 kubelet[2455]: E1030 00:03:16.826072 2455 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 30 00:03:16.826257 kubelet[2455]: I1030 00:03:16.826223 2455 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 30 00:03:16.827864 kubelet[2455]: E1030 00:03:16.827826 2455 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 30 00:03:16.884944 
kubelet[2455]: I1030 00:03:16.884787 2455 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 30 00:03:18.304698 kubelet[2455]: I1030 00:03:18.304624 2455 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 30 00:03:18.316016 kubelet[2455]: E1030 00:03:18.315849 2455 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:03:18.890791 kubelet[2455]: E1030 00:03:18.890740 2455 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:03:18.942388 systemd[1]: Reload requested from client PID 2768 ('systemctl') (unit session-9.scope)... Oct 30 00:03:18.942410 systemd[1]: Reloading... Oct 30 00:03:19.041656 zram_generator::config[2813]: No configuration found. Oct 30 00:03:19.297054 systemd[1]: Reloading finished in 354 ms. Oct 30 00:03:19.329174 kubelet[2455]: I1030 00:03:19.329103 2455 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 30 00:03:19.329183 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 00:03:19.352741 systemd[1]: kubelet.service: Deactivated successfully. Oct 30 00:03:19.353201 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 30 00:03:19.353285 systemd[1]: kubelet.service: Consumed 1.451s CPU time, 127.4M memory peak. Oct 30 00:03:19.355907 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 30 00:03:19.624941 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Oct 30 00:03:19.638244 (kubelet)[2857]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 30 00:03:19.681013 kubelet[2857]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 30 00:03:19.681013 kubelet[2857]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 30 00:03:19.681449 kubelet[2857]: I1030 00:03:19.681055 2857 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 30 00:03:19.689370 kubelet[2857]: I1030 00:03:19.689316 2857 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Oct 30 00:03:19.689370 kubelet[2857]: I1030 00:03:19.689349 2857 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 30 00:03:19.689590 kubelet[2857]: I1030 00:03:19.689386 2857 watchdog_linux.go:95] "Systemd watchdog is not enabled" Oct 30 00:03:19.689590 kubelet[2857]: I1030 00:03:19.689399 2857 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Oct 30 00:03:19.689653 kubelet[2857]: I1030 00:03:19.689632 2857 server.go:956] "Client rotation is on, will bootstrap in background" Oct 30 00:03:19.690870 kubelet[2857]: I1030 00:03:19.690839 2857 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Oct 30 00:03:19.693135 kubelet[2857]: I1030 00:03:19.693090 2857 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 30 00:03:19.697144 kubelet[2857]: I1030 00:03:19.697094 2857 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Oct 30 00:03:19.704282 kubelet[2857]: I1030 00:03:19.703135 2857 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Oct 30 00:03:19.707746 kubelet[2857]: I1030 00:03:19.707682 2857 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 30 00:03:19.707903 kubelet[2857]: I1030 00:03:19.707735 2857 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 30 00:03:19.707992 kubelet[2857]: I1030 00:03:19.707906 2857 topology_manager.go:138] "Creating topology manager with none policy" Oct 30 00:03:19.707992 kubelet[2857]: I1030 00:03:19.707917 2857 container_manager_linux.go:306] "Creating device plugin manager" Oct 30 00:03:19.707992 kubelet[2857]: I1030 00:03:19.707949 2857 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Oct 30 00:03:19.708791 kubelet[2857]: I1030 00:03:19.708763 2857 state_mem.go:36] 
"Initialized new in-memory state store" Oct 30 00:03:19.708976 kubelet[2857]: I1030 00:03:19.708946 2857 kubelet.go:475] "Attempting to sync node with API server" Oct 30 00:03:19.708976 kubelet[2857]: I1030 00:03:19.708960 2857 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 30 00:03:19.709067 kubelet[2857]: I1030 00:03:19.708985 2857 kubelet.go:387] "Adding apiserver pod source" Oct 30 00:03:19.709067 kubelet[2857]: I1030 00:03:19.709028 2857 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 30 00:03:19.710055 kubelet[2857]: I1030 00:03:19.709978 2857 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Oct 30 00:03:19.710495 kubelet[2857]: I1030 00:03:19.710464 2857 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Oct 30 00:03:19.710495 kubelet[2857]: I1030 00:03:19.710494 2857 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Oct 30 00:03:19.713398 kubelet[2857]: I1030 00:03:19.712894 2857 server.go:1262] "Started kubelet" Oct 30 00:03:19.713398 kubelet[2857]: I1030 00:03:19.713181 2857 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 30 00:03:19.713398 kubelet[2857]: I1030 00:03:19.713239 2857 server_v1.go:49] "podresources" method="list" useActivePods=true Oct 30 00:03:19.714701 kubelet[2857]: I1030 00:03:19.713466 2857 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 30 00:03:19.714701 kubelet[2857]: I1030 00:03:19.713505 2857 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Oct 30 00:03:19.714701 kubelet[2857]: I1030 00:03:19.713591 2857 fs_resource_analyzer.go:67] "Starting FS 
ResourceAnalyzer" Oct 30 00:03:19.714907 kubelet[2857]: I1030 00:03:19.714884 2857 server.go:310] "Adding debug handlers to kubelet server" Oct 30 00:03:19.715474 kubelet[2857]: I1030 00:03:19.715445 2857 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 30 00:03:19.721878 kubelet[2857]: E1030 00:03:19.721833 2857 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 30 00:03:19.724204 kubelet[2857]: I1030 00:03:19.723844 2857 volume_manager.go:313] "Starting Kubelet Volume Manager" Oct 30 00:03:19.724204 kubelet[2857]: I1030 00:03:19.723934 2857 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 30 00:03:19.724204 kubelet[2857]: I1030 00:03:19.724071 2857 reconciler.go:29] "Reconciler: start to sync state" Oct 30 00:03:19.725629 kubelet[2857]: I1030 00:03:19.725259 2857 factory.go:223] Registration of the systemd container factory successfully Oct 30 00:03:19.726104 kubelet[2857]: I1030 00:03:19.725384 2857 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 30 00:03:19.729319 kubelet[2857]: I1030 00:03:19.729294 2857 factory.go:223] Registration of the containerd container factory successfully Oct 30 00:03:19.733890 kubelet[2857]: I1030 00:03:19.733830 2857 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Oct 30 00:03:19.735693 kubelet[2857]: I1030 00:03:19.735411 2857 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Oct 30 00:03:19.735693 kubelet[2857]: I1030 00:03:19.735432 2857 status_manager.go:244] "Starting to sync pod status with apiserver" Oct 30 00:03:19.735693 kubelet[2857]: I1030 00:03:19.735456 2857 kubelet.go:2427] "Starting kubelet main sync loop" Oct 30 00:03:19.735693 kubelet[2857]: E1030 00:03:19.735503 2857 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 30 00:03:19.804819 kubelet[2857]: I1030 00:03:19.804787 2857 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 30 00:03:19.804819 kubelet[2857]: I1030 00:03:19.804805 2857 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 30 00:03:19.804819 kubelet[2857]: I1030 00:03:19.804824 2857 state_mem.go:36] "Initialized new in-memory state store" Oct 30 00:03:19.805009 kubelet[2857]: I1030 00:03:19.804976 2857 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 30 00:03:19.805009 kubelet[2857]: I1030 00:03:19.804986 2857 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 30 00:03:19.805009 kubelet[2857]: I1030 00:03:19.805007 2857 policy_none.go:49] "None policy: Start" Oct 30 00:03:19.805086 kubelet[2857]: I1030 00:03:19.805017 2857 memory_manager.go:187] "Starting memorymanager" policy="None" Oct 30 00:03:19.805086 kubelet[2857]: I1030 00:03:19.805038 2857 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Oct 30 00:03:19.805144 kubelet[2857]: I1030 00:03:19.805130 2857 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Oct 30 00:03:19.805144 kubelet[2857]: I1030 00:03:19.805143 2857 policy_none.go:47] "Start" Oct 30 00:03:19.812065 kubelet[2857]: E1030 00:03:19.812035 2857 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Oct 30 00:03:19.812262 kubelet[2857]: I1030 00:03:19.812246 
2857 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 30 00:03:19.812304 kubelet[2857]: I1030 00:03:19.812264 2857 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 30 00:03:19.813190 kubelet[2857]: I1030 00:03:19.813159 2857 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 30 00:03:19.816647 kubelet[2857]: E1030 00:03:19.815156 2857 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Oct 30 00:03:19.836173 kubelet[2857]: I1030 00:03:19.836015 2857 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 30 00:03:19.836173 kubelet[2857]: I1030 00:03:19.836077 2857 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 30 00:03:19.836173 kubelet[2857]: I1030 00:03:19.836140 2857 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 30 00:03:19.918162 kubelet[2857]: I1030 00:03:19.918022 2857 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 30 00:03:19.927697 kubelet[2857]: I1030 00:03:19.927655 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 00:03:19.927697 kubelet[2857]: I1030 00:03:19.927693 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " 
pod="kube-system/kube-controller-manager-localhost" Oct 30 00:03:19.927697 kubelet[2857]: I1030 00:03:19.927711 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 00:03:19.927897 kubelet[2857]: I1030 00:03:19.927730 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 00:03:19.927897 kubelet[2857]: I1030 00:03:19.927752 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Oct 30 00:03:19.927897 kubelet[2857]: I1030 00:03:19.927799 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Oct 30 00:03:19.927897 kubelet[2857]: I1030 00:03:19.927821 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/52aa42f49d00707cfc4893f689a00cc8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"52aa42f49d00707cfc4893f689a00cc8\") 
" pod="kube-system/kube-apiserver-localhost" Oct 30 00:03:19.927897 kubelet[2857]: I1030 00:03:19.927841 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/52aa42f49d00707cfc4893f689a00cc8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"52aa42f49d00707cfc4893f689a00cc8\") " pod="kube-system/kube-apiserver-localhost" Oct 30 00:03:19.928320 kubelet[2857]: I1030 00:03:19.928291 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/52aa42f49d00707cfc4893f689a00cc8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"52aa42f49d00707cfc4893f689a00cc8\") " pod="kube-system/kube-apiserver-localhost" Oct 30 00:03:20.067661 kubelet[2857]: E1030 00:03:20.064725 2857 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 30 00:03:20.067661 kubelet[2857]: E1030 00:03:20.065024 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:03:20.091672 kubelet[2857]: I1030 00:03:20.091615 2857 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 30 00:03:20.091909 kubelet[2857]: I1030 00:03:20.091729 2857 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 30 00:03:20.310560 kubelet[2857]: E1030 00:03:20.310489 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:03:20.310780 kubelet[2857]: E1030 00:03:20.310626 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:03:20.710224 kubelet[2857]: I1030 00:03:20.710060 2857 apiserver.go:52] "Watching apiserver" Oct 30 00:03:20.732078 kubelet[2857]: I1030 00:03:20.732027 2857 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 30 00:03:20.753685 kubelet[2857]: I1030 00:03:20.752840 2857 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 30 00:03:20.753685 kubelet[2857]: I1030 00:03:20.753034 2857 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 30 00:03:20.753685 kubelet[2857]: I1030 00:03:20.753239 2857 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 30 00:03:20.896195 kubelet[2857]: E1030 00:03:20.884139 2857 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 30 00:03:20.896195 kubelet[2857]: E1030 00:03:20.884193 2857 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 30 00:03:20.896195 kubelet[2857]: E1030 00:03:20.884397 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:03:20.896195 kubelet[2857]: E1030 00:03:20.884430 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:03:20.896195 kubelet[2857]: E1030 00:03:20.884543 2857 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Oct 30 00:03:20.896195 kubelet[2857]: E1030 00:03:20.886107 2857 
dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:03:20.935645 kubelet[2857]: I1030 00:03:20.934836 2857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.9347985269999999 podStartE2EDuration="1.934798527s" podCreationTimestamp="2025-10-30 00:03:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 00:03:20.885197037 +0000 UTC m=+1.241706814" watchObservedRunningTime="2025-10-30 00:03:20.934798527 +0000 UTC m=+1.291308294" Oct 30 00:03:20.956207 kubelet[2857]: I1030 00:03:20.956140 2857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.956120522 podStartE2EDuration="1.956120522s" podCreationTimestamp="2025-10-30 00:03:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 00:03:20.954865584 +0000 UTC m=+1.311375351" watchObservedRunningTime="2025-10-30 00:03:20.956120522 +0000 UTC m=+1.312630289" Oct 30 00:03:20.956432 kubelet[2857]: I1030 00:03:20.956280 2857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.956275876 podStartE2EDuration="2.956275876s" podCreationTimestamp="2025-10-30 00:03:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 00:03:20.941654725 +0000 UTC m=+1.298164522" watchObservedRunningTime="2025-10-30 00:03:20.956275876 +0000 UTC m=+1.312785643" Oct 30 00:03:21.754542 kubelet[2857]: E1030 00:03:21.754493 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:03:21.755146 kubelet[2857]: E1030 00:03:21.754722 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:03:21.755146 kubelet[2857]: E1030 00:03:21.754722 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:03:22.757299 kubelet[2857]: E1030 00:03:22.757243 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:03:23.509314 kubelet[2857]: I1030 00:03:23.509234 2857 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 30 00:03:23.509720 containerd[1633]: time="2025-10-30T00:03:23.509682490Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Oct 30 00:03:23.510152 kubelet[2857]: I1030 00:03:23.509944 2857 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 30 00:03:23.885390 kubelet[2857]: E1030 00:03:23.884841 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:03:24.149183 kubelet[2857]: E1030 00:03:24.148998 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:03:24.512471 systemd[1]: Created slice kubepods-besteffort-pod19dfc4ed_1445_4241_a258_0c16707ae4cc.slice - libcontainer container kubepods-besteffort-pod19dfc4ed_1445_4241_a258_0c16707ae4cc.slice. Oct 30 00:03:24.656484 kubelet[2857]: I1030 00:03:24.656428 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/19dfc4ed-1445-4241-a258-0c16707ae4cc-xtables-lock\") pod \"kube-proxy-gg7zm\" (UID: \"19dfc4ed-1445-4241-a258-0c16707ae4cc\") " pod="kube-system/kube-proxy-gg7zm" Oct 30 00:03:24.656723 kubelet[2857]: I1030 00:03:24.656495 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/19dfc4ed-1445-4241-a258-0c16707ae4cc-kube-proxy\") pod \"kube-proxy-gg7zm\" (UID: \"19dfc4ed-1445-4241-a258-0c16707ae4cc\") " pod="kube-system/kube-proxy-gg7zm" Oct 30 00:03:24.656723 kubelet[2857]: I1030 00:03:24.656531 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/19dfc4ed-1445-4241-a258-0c16707ae4cc-lib-modules\") pod \"kube-proxy-gg7zm\" (UID: \"19dfc4ed-1445-4241-a258-0c16707ae4cc\") " pod="kube-system/kube-proxy-gg7zm" Oct 30 00:03:24.656723 
kubelet[2857]: I1030 00:03:24.656553 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btppn\" (UniqueName: \"kubernetes.io/projected/19dfc4ed-1445-4241-a258-0c16707ae4cc-kube-api-access-btppn\") pod \"kube-proxy-gg7zm\" (UID: \"19dfc4ed-1445-4241-a258-0c16707ae4cc\") " pod="kube-system/kube-proxy-gg7zm" Oct 30 00:03:24.710140 systemd[1]: Created slice kubepods-besteffort-podcd8d25f8_a692_4d8a_bde8_f166607b7924.slice - libcontainer container kubepods-besteffort-podcd8d25f8_a692_4d8a_bde8_f166607b7924.slice. Oct 30 00:03:24.762729 kubelet[2857]: E1030 00:03:24.762683 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:03:24.763960 kubelet[2857]: E1030 00:03:24.762968 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:03:24.847113 kubelet[2857]: E1030 00:03:24.847065 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:03:24.853743 containerd[1633]: time="2025-10-30T00:03:24.853695740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gg7zm,Uid:19dfc4ed-1445-4241-a258-0c16707ae4cc,Namespace:kube-system,Attempt:0,}" Oct 30 00:03:24.858074 kubelet[2857]: I1030 00:03:24.858041 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jwd8\" (UniqueName: \"kubernetes.io/projected/cd8d25f8-a692-4d8a-bde8-f166607b7924-kube-api-access-2jwd8\") pod \"tigera-operator-65cdcdfd6d-5gb4t\" (UID: \"cd8d25f8-a692-4d8a-bde8-f166607b7924\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-5gb4t" Oct 30 00:03:24.858192 kubelet[2857]: I1030 
00:03:24.858096 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cd8d25f8-a692-4d8a-bde8-f166607b7924-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-5gb4t\" (UID: \"cd8d25f8-a692-4d8a-bde8-f166607b7924\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-5gb4t" Oct 30 00:03:24.883081 containerd[1633]: time="2025-10-30T00:03:24.883008180Z" level=info msg="connecting to shim 68499ffba573291c417b01cced596616b0b0af0c890ff6f929d69a4675509a43" address="unix:///run/containerd/s/76b388af47e4ead8749f5adb9718eec94526b538a2255e73f56790b7c03a02e4" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:03:24.923831 systemd[1]: Started cri-containerd-68499ffba573291c417b01cced596616b0b0af0c890ff6f929d69a4675509a43.scope - libcontainer container 68499ffba573291c417b01cced596616b0b0af0c890ff6f929d69a4675509a43. Oct 30 00:03:24.954080 containerd[1633]: time="2025-10-30T00:03:24.954020043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gg7zm,Uid:19dfc4ed-1445-4241-a258-0c16707ae4cc,Namespace:kube-system,Attempt:0,} returns sandbox id \"68499ffba573291c417b01cced596616b0b0af0c890ff6f929d69a4675509a43\"" Oct 30 00:03:24.955096 kubelet[2857]: E1030 00:03:24.955069 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:03:24.966528 containerd[1633]: time="2025-10-30T00:03:24.966471418Z" level=info msg="CreateContainer within sandbox \"68499ffba573291c417b01cced596616b0b0af0c890ff6f929d69a4675509a43\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 30 00:03:24.980110 containerd[1633]: time="2025-10-30T00:03:24.980058548Z" level=info msg="Container 0ba58d6714b018bd710f207ad4c8116a396bb0a716d1e51872f181ed27bd3bce: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:03:24.990489 containerd[1633]: 
time="2025-10-30T00:03:24.990434311Z" level=info msg="CreateContainer within sandbox \"68499ffba573291c417b01cced596616b0b0af0c890ff6f929d69a4675509a43\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0ba58d6714b018bd710f207ad4c8116a396bb0a716d1e51872f181ed27bd3bce\"" Oct 30 00:03:24.991168 containerd[1633]: time="2025-10-30T00:03:24.991140161Z" level=info msg="StartContainer for \"0ba58d6714b018bd710f207ad4c8116a396bb0a716d1e51872f181ed27bd3bce\"" Oct 30 00:03:24.992863 containerd[1633]: time="2025-10-30T00:03:24.992823372Z" level=info msg="connecting to shim 0ba58d6714b018bd710f207ad4c8116a396bb0a716d1e51872f181ed27bd3bce" address="unix:///run/containerd/s/76b388af47e4ead8749f5adb9718eec94526b538a2255e73f56790b7c03a02e4" protocol=ttrpc version=3 Oct 30 00:03:25.017598 containerd[1633]: time="2025-10-30T00:03:25.017502784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-5gb4t,Uid:cd8d25f8-a692-4d8a-bde8-f166607b7924,Namespace:tigera-operator,Attempt:0,}" Oct 30 00:03:25.031850 systemd[1]: Started cri-containerd-0ba58d6714b018bd710f207ad4c8116a396bb0a716d1e51872f181ed27bd3bce.scope - libcontainer container 0ba58d6714b018bd710f207ad4c8116a396bb0a716d1e51872f181ed27bd3bce. Oct 30 00:03:25.042043 containerd[1633]: time="2025-10-30T00:03:25.041968063Z" level=info msg="connecting to shim 3e52bdc1cc7449e008114859e6f4c6137c5f492650b847b7a99fd67eb0afce73" address="unix:///run/containerd/s/1cd85e20a232e47424589a6ab6d071c6ceffe7f24230181081d72ab408ba4ced" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:03:25.080957 systemd[1]: Started cri-containerd-3e52bdc1cc7449e008114859e6f4c6137c5f492650b847b7a99fd67eb0afce73.scope - libcontainer container 3e52bdc1cc7449e008114859e6f4c6137c5f492650b847b7a99fd67eb0afce73. 
Oct 30 00:03:25.098578 containerd[1633]: time="2025-10-30T00:03:25.098523575Z" level=info msg="StartContainer for \"0ba58d6714b018bd710f207ad4c8116a396bb0a716d1e51872f181ed27bd3bce\" returns successfully" Oct 30 00:03:25.155220 containerd[1633]: time="2025-10-30T00:03:25.155160945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-5gb4t,Uid:cd8d25f8-a692-4d8a-bde8-f166607b7924,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"3e52bdc1cc7449e008114859e6f4c6137c5f492650b847b7a99fd67eb0afce73\"" Oct 30 00:03:25.158881 containerd[1633]: time="2025-10-30T00:03:25.158828475Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Oct 30 00:03:25.770346 kubelet[2857]: E1030 00:03:25.769800 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:03:25.777586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1271165745.mount: Deactivated successfully. Oct 30 00:03:27.389226 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount160952788.mount: Deactivated successfully. 
Oct 30 00:03:27.775324 containerd[1633]: time="2025-10-30T00:03:27.775262784Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:03:27.776196 containerd[1633]: time="2025-10-30T00:03:27.776151561Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Oct 30 00:03:27.777444 containerd[1633]: time="2025-10-30T00:03:27.777407284Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:03:27.781345 containerd[1633]: time="2025-10-30T00:03:27.781296491Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:03:27.782200 containerd[1633]: time="2025-10-30T00:03:27.782125377Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.623057839s" Oct 30 00:03:27.782200 containerd[1633]: time="2025-10-30T00:03:27.782182142Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Oct 30 00:03:27.788102 containerd[1633]: time="2025-10-30T00:03:27.788030055Z" level=info msg="CreateContainer within sandbox \"3e52bdc1cc7449e008114859e6f4c6137c5f492650b847b7a99fd67eb0afce73\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Oct 30 00:03:27.798267 containerd[1633]: time="2025-10-30T00:03:27.798205679Z" level=info msg="Container 
d5f841338bfbb76e2685123829df88c159568d81d54f852c99f477a2c230dcee: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:03:27.804433 containerd[1633]: time="2025-10-30T00:03:27.804378686Z" level=info msg="CreateContainer within sandbox \"3e52bdc1cc7449e008114859e6f4c6137c5f492650b847b7a99fd67eb0afce73\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d5f841338bfbb76e2685123829df88c159568d81d54f852c99f477a2c230dcee\"" Oct 30 00:03:27.805950 containerd[1633]: time="2025-10-30T00:03:27.805114706Z" level=info msg="StartContainer for \"d5f841338bfbb76e2685123829df88c159568d81d54f852c99f477a2c230dcee\"" Oct 30 00:03:27.806087 containerd[1633]: time="2025-10-30T00:03:27.806054644Z" level=info msg="connecting to shim d5f841338bfbb76e2685123829df88c159568d81d54f852c99f477a2c230dcee" address="unix:///run/containerd/s/1cd85e20a232e47424589a6ab6d071c6ceffe7f24230181081d72ab408ba4ced" protocol=ttrpc version=3 Oct 30 00:03:27.873916 systemd[1]: Started cri-containerd-d5f841338bfbb76e2685123829df88c159568d81d54f852c99f477a2c230dcee.scope - libcontainer container d5f841338bfbb76e2685123829df88c159568d81d54f852c99f477a2c230dcee. 
Oct 30 00:03:27.916557 containerd[1633]: time="2025-10-30T00:03:27.916491950Z" level=info msg="StartContainer for \"d5f841338bfbb76e2685123829df88c159568d81d54f852c99f477a2c230dcee\" returns successfully" Oct 30 00:03:28.787826 kubelet[2857]: I1030 00:03:28.787686 2857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gg7zm" podStartSLOduration=4.787661098 podStartE2EDuration="4.787661098s" podCreationTimestamp="2025-10-30 00:03:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 00:03:25.842552506 +0000 UTC m=+6.199062293" watchObservedRunningTime="2025-10-30 00:03:28.787661098 +0000 UTC m=+9.144170875" Oct 30 00:03:31.384301 kubelet[2857]: E1030 00:03:31.384247 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:03:31.954441 kubelet[2857]: I1030 00:03:31.954072 2857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-5gb4t" podStartSLOduration=5.328207753 podStartE2EDuration="7.954053029s" podCreationTimestamp="2025-10-30 00:03:24 +0000 UTC" firstStartedPulling="2025-10-30 00:03:25.157178056 +0000 UTC m=+5.513687823" lastFinishedPulling="2025-10-30 00:03:27.783023332 +0000 UTC m=+8.139533099" observedRunningTime="2025-10-30 00:03:28.788497562 +0000 UTC m=+9.145007329" watchObservedRunningTime="2025-10-30 00:03:31.954053029 +0000 UTC m=+12.310562796" Oct 30 00:03:37.294731 sudo[1885]: pam_unix(sudo:session): session closed for user root Oct 30 00:03:37.302677 sshd[1884]: Connection closed by 10.0.0.1 port 49712 Oct 30 00:03:37.306329 sshd-session[1881]: pam_unix(sshd:session): session closed for user core Oct 30 00:03:37.318313 systemd[1]: sshd@8-10.0.0.82:22-10.0.0.1:49712.service: Deactivated successfully. 
Oct 30 00:03:37.325805 systemd[1]: session-9.scope: Deactivated successfully. Oct 30 00:03:37.326389 systemd[1]: session-9.scope: Consumed 5.520s CPU time, 222.9M memory peak. Oct 30 00:03:37.330833 systemd-logind[1613]: Session 9 logged out. Waiting for processes to exit. Oct 30 00:03:37.335000 systemd-logind[1613]: Removed session 9. Oct 30 00:03:42.844543 systemd[1]: Created slice kubepods-besteffort-podb18f25b1_742e_4609_bcbd_137250110488.slice - libcontainer container kubepods-besteffort-podb18f25b1_742e_4609_bcbd_137250110488.slice. Oct 30 00:03:42.889642 kubelet[2857]: I1030 00:03:42.889507 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6w72\" (UniqueName: \"kubernetes.io/projected/b18f25b1-742e-4609-bcbd-137250110488-kube-api-access-h6w72\") pod \"calico-typha-5c6bbf44c5-zxk29\" (UID: \"b18f25b1-742e-4609-bcbd-137250110488\") " pod="calico-system/calico-typha-5c6bbf44c5-zxk29" Oct 30 00:03:42.889642 kubelet[2857]: I1030 00:03:42.889577 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b18f25b1-742e-4609-bcbd-137250110488-tigera-ca-bundle\") pod \"calico-typha-5c6bbf44c5-zxk29\" (UID: \"b18f25b1-742e-4609-bcbd-137250110488\") " pod="calico-system/calico-typha-5c6bbf44c5-zxk29" Oct 30 00:03:42.889642 kubelet[2857]: I1030 00:03:42.889645 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b18f25b1-742e-4609-bcbd-137250110488-typha-certs\") pod \"calico-typha-5c6bbf44c5-zxk29\" (UID: \"b18f25b1-742e-4609-bcbd-137250110488\") " pod="calico-system/calico-typha-5c6bbf44c5-zxk29" Oct 30 00:03:43.156109 systemd[1]: Created slice kubepods-besteffort-podec87d44a_fdcf_4560_be97_f463130e1d33.slice - libcontainer container kubepods-besteffort-podec87d44a_fdcf_4560_be97_f463130e1d33.slice. 
Oct 30 00:03:43.157732 kubelet[2857]: E1030 00:03:43.157690 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:03:43.159102 containerd[1633]: time="2025-10-30T00:03:43.159064511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c6bbf44c5-zxk29,Uid:b18f25b1-742e-4609-bcbd-137250110488,Namespace:calico-system,Attempt:0,}" Oct 30 00:03:43.191166 kubelet[2857]: I1030 00:03:43.191120 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ec87d44a-fdcf-4560-be97-f463130e1d33-var-run-calico\") pod \"calico-node-dxnbl\" (UID: \"ec87d44a-fdcf-4560-be97-f463130e1d33\") " pod="calico-system/calico-node-dxnbl" Oct 30 00:03:43.191368 kubelet[2857]: I1030 00:03:43.191196 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ec87d44a-fdcf-4560-be97-f463130e1d33-policysync\") pod \"calico-node-dxnbl\" (UID: \"ec87d44a-fdcf-4560-be97-f463130e1d33\") " pod="calico-system/calico-node-dxnbl" Oct 30 00:03:43.191368 kubelet[2857]: I1030 00:03:43.191221 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ec87d44a-fdcf-4560-be97-f463130e1d33-xtables-lock\") pod \"calico-node-dxnbl\" (UID: \"ec87d44a-fdcf-4560-be97-f463130e1d33\") " pod="calico-system/calico-node-dxnbl" Oct 30 00:03:43.191368 kubelet[2857]: I1030 00:03:43.191238 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ec87d44a-fdcf-4560-be97-f463130e1d33-cni-bin-dir\") pod \"calico-node-dxnbl\" (UID: \"ec87d44a-fdcf-4560-be97-f463130e1d33\") " 
pod="calico-system/calico-node-dxnbl" Oct 30 00:03:43.191368 kubelet[2857]: I1030 00:03:43.191278 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ec87d44a-fdcf-4560-be97-f463130e1d33-flexvol-driver-host\") pod \"calico-node-dxnbl\" (UID: \"ec87d44a-fdcf-4560-be97-f463130e1d33\") " pod="calico-system/calico-node-dxnbl" Oct 30 00:03:43.191368 kubelet[2857]: I1030 00:03:43.191313 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ec87d44a-fdcf-4560-be97-f463130e1d33-cni-net-dir\") pod \"calico-node-dxnbl\" (UID: \"ec87d44a-fdcf-4560-be97-f463130e1d33\") " pod="calico-system/calico-node-dxnbl" Oct 30 00:03:43.191504 kubelet[2857]: I1030 00:03:43.191329 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ec87d44a-fdcf-4560-be97-f463130e1d33-lib-modules\") pod \"calico-node-dxnbl\" (UID: \"ec87d44a-fdcf-4560-be97-f463130e1d33\") " pod="calico-system/calico-node-dxnbl" Oct 30 00:03:43.191504 kubelet[2857]: I1030 00:03:43.191377 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ec87d44a-fdcf-4560-be97-f463130e1d33-node-certs\") pod \"calico-node-dxnbl\" (UID: \"ec87d44a-fdcf-4560-be97-f463130e1d33\") " pod="calico-system/calico-node-dxnbl" Oct 30 00:03:43.191504 kubelet[2857]: I1030 00:03:43.191395 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec87d44a-fdcf-4560-be97-f463130e1d33-tigera-ca-bundle\") pod \"calico-node-dxnbl\" (UID: \"ec87d44a-fdcf-4560-be97-f463130e1d33\") " pod="calico-system/calico-node-dxnbl" Oct 30 00:03:43.191504 
kubelet[2857]: I1030 00:03:43.191410 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ec87d44a-fdcf-4560-be97-f463130e1d33-var-lib-calico\") pod \"calico-node-dxnbl\" (UID: \"ec87d44a-fdcf-4560-be97-f463130e1d33\") " pod="calico-system/calico-node-dxnbl" Oct 30 00:03:43.191504 kubelet[2857]: I1030 00:03:43.191449 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ec87d44a-fdcf-4560-be97-f463130e1d33-cni-log-dir\") pod \"calico-node-dxnbl\" (UID: \"ec87d44a-fdcf-4560-be97-f463130e1d33\") " pod="calico-system/calico-node-dxnbl" Oct 30 00:03:43.191715 kubelet[2857]: I1030 00:03:43.191465 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkvp9\" (UniqueName: \"kubernetes.io/projected/ec87d44a-fdcf-4560-be97-f463130e1d33-kube-api-access-qkvp9\") pod \"calico-node-dxnbl\" (UID: \"ec87d44a-fdcf-4560-be97-f463130e1d33\") " pod="calico-system/calico-node-dxnbl" Oct 30 00:03:43.221254 containerd[1633]: time="2025-10-30T00:03:43.221185562Z" level=info msg="connecting to shim d4d71a0aca069507d09878751a2d2a29f84079b1440d327ba5ee7504141c84a2" address="unix:///run/containerd/s/76afe0d6c6a9a0ed04d87d9865597a4dc829c71a14e0a75b81cce93a3c8e7082" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:03:43.255537 systemd[1]: Started cri-containerd-d4d71a0aca069507d09878751a2d2a29f84079b1440d327ba5ee7504141c84a2.scope - libcontainer container d4d71a0aca069507d09878751a2d2a29f84079b1440d327ba5ee7504141c84a2. 
Oct 30 00:03:43.372882 kubelet[2857]: E1030 00:03:43.372807 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bmzt2" podUID="dac688c3-f50b-4d08-95db-f1aa2487f334" Oct 30 00:03:43.374845 kubelet[2857]: E1030 00:03:43.374797 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:43.374961 kubelet[2857]: W1030 00:03:43.374874 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:43.375042 kubelet[2857]: E1030 00:03:43.374914 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:03:43.375587 kubelet[2857]: E1030 00:03:43.375541 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:43.375587 kubelet[2857]: W1030 00:03:43.375567 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:43.375587 kubelet[2857]: E1030 00:03:43.375592 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:03:43.376916 kubelet[2857]: E1030 00:03:43.376876 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:43.376916 kubelet[2857]: W1030 00:03:43.376897 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:43.377251 kubelet[2857]: E1030 00:03:43.376911 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:03:43.377732 containerd[1633]: time="2025-10-30T00:03:43.377485371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c6bbf44c5-zxk29,Uid:b18f25b1-742e-4609-bcbd-137250110488,Namespace:calico-system,Attempt:0,} returns sandbox id \"d4d71a0aca069507d09878751a2d2a29f84079b1440d327ba5ee7504141c84a2\"" Oct 30 00:03:43.379495 kubelet[2857]: E1030 00:03:43.379458 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:03:43.380568 kubelet[2857]: E1030 00:03:43.380523 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:43.380808 kubelet[2857]: W1030 00:03:43.380559 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:43.380808 kubelet[2857]: E1030 00:03:43.380626 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:03:43.381469 kubelet[2857]: E1030 00:03:43.381429 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:43.381469 kubelet[2857]: W1030 00:03:43.381452 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:43.381469 kubelet[2857]: E1030 00:03:43.381470 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:03:43.382787 kubelet[2857]: E1030 00:03:43.382448 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:43.383320 kubelet[2857]: W1030 00:03:43.382789 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:43.383320 kubelet[2857]: E1030 00:03:43.383088 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:03:43.383561 containerd[1633]: time="2025-10-30T00:03:43.382799324Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Oct 30 00:03:43.383784 kubelet[2857]: E1030 00:03:43.383703 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:43.383784 kubelet[2857]: W1030 00:03:43.383727 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:43.383784 kubelet[2857]: E1030 00:03:43.383746 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:03:43.384228 kubelet[2857]: E1030 00:03:43.384004 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:43.384228 kubelet[2857]: W1030 00:03:43.384049 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:43.384228 kubelet[2857]: E1030 00:03:43.384072 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:03:43.384693 kubelet[2857]: E1030 00:03:43.384385 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:43.384693 kubelet[2857]: W1030 00:03:43.384409 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:43.384693 kubelet[2857]: E1030 00:03:43.384430 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:03:43.384693 kubelet[2857]: E1030 00:03:43.384662 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:43.384693 kubelet[2857]: W1030 00:03:43.384682 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:43.385723 kubelet[2857]: E1030 00:03:43.384718 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:03:43.385723 kubelet[2857]: E1030 00:03:43.384954 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:43.385723 kubelet[2857]: W1030 00:03:43.384969 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:43.385723 kubelet[2857]: E1030 00:03:43.384984 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:03:43.385879 kubelet[2857]: E1030 00:03:43.385715 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:43.386233 kubelet[2857]: W1030 00:03:43.386182 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:43.386233 kubelet[2857]: E1030 00:03:43.386216 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:03:43.386823 kubelet[2857]: E1030 00:03:43.386803 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:43.386823 kubelet[2857]: W1030 00:03:43.386819 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:43.386823 kubelet[2857]: E1030 00:03:43.386829 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:03:43.387316 kubelet[2857]: E1030 00:03:43.387171 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:43.387316 kubelet[2857]: W1030 00:03:43.387183 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:43.387316 kubelet[2857]: E1030 00:03:43.387193 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:03:43.387843 kubelet[2857]: E1030 00:03:43.387778 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:43.387843 kubelet[2857]: W1030 00:03:43.387794 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:43.387843 kubelet[2857]: E1030 00:03:43.387804 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:03:43.388769 kubelet[2857]: E1030 00:03:43.388734 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:43.388769 kubelet[2857]: W1030 00:03:43.388761 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:43.388889 kubelet[2857]: E1030 00:03:43.388786 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:03:43.389173 kubelet[2857]: E1030 00:03:43.389139 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:43.389408 kubelet[2857]: W1030 00:03:43.389250 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:43.389628 kubelet[2857]: E1030 00:03:43.389488 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:03:43.389984 kubelet[2857]: E1030 00:03:43.389934 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:43.389984 kubelet[2857]: W1030 00:03:43.389947 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:43.389984 kubelet[2857]: E1030 00:03:43.389963 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:03:43.390308 kubelet[2857]: E1030 00:03:43.390216 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:43.390308 kubelet[2857]: W1030 00:03:43.390240 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:43.390308 kubelet[2857]: E1030 00:03:43.390250 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:03:43.390562 kubelet[2857]: E1030 00:03:43.390541 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:43.390562 kubelet[2857]: W1030 00:03:43.390557 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:43.390703 kubelet[2857]: E1030 00:03:43.390567 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:03:43.394463 kubelet[2857]: E1030 00:03:43.394241 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:43.394463 kubelet[2857]: W1030 00:03:43.394266 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:43.394463 kubelet[2857]: E1030 00:03:43.394286 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:03:43.394463 kubelet[2857]: I1030 00:03:43.394317 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtdzx\" (UniqueName: \"kubernetes.io/projected/dac688c3-f50b-4d08-95db-f1aa2487f334-kube-api-access-dtdzx\") pod \"csi-node-driver-bmzt2\" (UID: \"dac688c3-f50b-4d08-95db-f1aa2487f334\") " pod="calico-system/csi-node-driver-bmzt2" Oct 30 00:03:43.394733 kubelet[2857]: E1030 00:03:43.394717 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:43.394797 kubelet[2857]: W1030 00:03:43.394784 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:43.394986 kubelet[2857]: E1030 00:03:43.394863 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:03:43.394986 kubelet[2857]: I1030 00:03:43.394882 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/dac688c3-f50b-4d08-95db-f1aa2487f334-registration-dir\") pod \"csi-node-driver-bmzt2\" (UID: \"dac688c3-f50b-4d08-95db-f1aa2487f334\") " pod="calico-system/csi-node-driver-bmzt2" Oct 30 00:03:43.395150 kubelet[2857]: E1030 00:03:43.395113 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:43.395311 kubelet[2857]: W1030 00:03:43.395220 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:43.395311 kubelet[2857]: E1030 00:03:43.395234 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:03:43.395311 kubelet[2857]: I1030 00:03:43.395262 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dac688c3-f50b-4d08-95db-f1aa2487f334-kubelet-dir\") pod \"csi-node-driver-bmzt2\" (UID: \"dac688c3-f50b-4d08-95db-f1aa2487f334\") " pod="calico-system/csi-node-driver-bmzt2" Oct 30 00:03:43.396273 kubelet[2857]: E1030 00:03:43.396052 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:43.396273 kubelet[2857]: W1030 00:03:43.396084 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:43.396273 kubelet[2857]: E1030 00:03:43.396098 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:03:43.397677 kubelet[2857]: E1030 00:03:43.397583 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:43.398223 kubelet[2857]: W1030 00:03:43.397693 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:43.398223 kubelet[2857]: E1030 00:03:43.397778 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:03:43.398312 kubelet[2857]: E1030 00:03:43.398274 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:43.398312 kubelet[2857]: W1030 00:03:43.398292 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:43.398417 kubelet[2857]: E1030 00:03:43.398312 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:03:43.398417 kubelet[2857]: I1030 00:03:43.398356 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/dac688c3-f50b-4d08-95db-f1aa2487f334-varrun\") pod \"csi-node-driver-bmzt2\" (UID: \"dac688c3-f50b-4d08-95db-f1aa2487f334\") " pod="calico-system/csi-node-driver-bmzt2" Oct 30 00:03:43.399006 kubelet[2857]: E1030 00:03:43.398860 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:43.399006 kubelet[2857]: W1030 00:03:43.398930 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:43.399006 kubelet[2857]: E1030 00:03:43.398944 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:03:43.400029 kubelet[2857]: E1030 00:03:43.399953 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:43.400160 kubelet[2857]: W1030 00:03:43.400143 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:43.400293 kubelet[2857]: E1030 00:03:43.400264 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:03:43.400852 kubelet[2857]: E1030 00:03:43.400828 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:43.400852 kubelet[2857]: W1030 00:03:43.400846 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:43.400960 kubelet[2857]: E1030 00:03:43.400859 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:03:43.401766 kubelet[2857]: E1030 00:03:43.401744 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:43.401766 kubelet[2857]: W1030 00:03:43.401759 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:43.401969 kubelet[2857]: E1030 00:03:43.401771 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:03:43.401969 kubelet[2857]: I1030 00:03:43.401803 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/dac688c3-f50b-4d08-95db-f1aa2487f334-socket-dir\") pod \"csi-node-driver-bmzt2\" (UID: \"dac688c3-f50b-4d08-95db-f1aa2487f334\") " pod="calico-system/csi-node-driver-bmzt2" Oct 30 00:03:43.402176 kubelet[2857]: E1030 00:03:43.402148 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:43.402224 kubelet[2857]: W1030 00:03:43.402177 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:43.402224 kubelet[2857]: E1030 00:03:43.402200 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:03:43.402656 kubelet[2857]: E1030 00:03:43.402500 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:43.402656 kubelet[2857]: W1030 00:03:43.402522 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:43.402656 kubelet[2857]: E1030 00:03:43.402536 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:03:43.403026 kubelet[2857]: E1030 00:03:43.403011 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:43.403162 kubelet[2857]: W1030 00:03:43.403085 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:43.403162 kubelet[2857]: E1030 00:03:43.403103 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:03:43.403659 kubelet[2857]: E1030 00:03:43.403506 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:43.403659 kubelet[2857]: W1030 00:03:43.403520 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:43.403659 kubelet[2857]: E1030 00:03:43.403532 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:03:43.404440 kubelet[2857]: E1030 00:03:43.404416 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:43.404440 kubelet[2857]: W1030 00:03:43.404433 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:43.404535 kubelet[2857]: E1030 00:03:43.404447 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:03:43.465255 kubelet[2857]: E1030 00:03:43.465122 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:03:43.465790 containerd[1633]: time="2025-10-30T00:03:43.465672924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dxnbl,Uid:ec87d44a-fdcf-4560-be97-f463130e1d33,Namespace:calico-system,Attempt:0,}" Oct 30 00:03:43.496645 containerd[1633]: time="2025-10-30T00:03:43.496555640Z" level=info msg="connecting to shim 7ec8a0b8220920cd2d988d75af1b9c95873b29aede77a29a7e0b4bad3185047c" address="unix:///run/containerd/s/e2bfd54f0cb0271eea66752af763f1c49c18e33837a5b9c572432850b5348a0e" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:03:43.503424 kubelet[2857]: E1030 00:03:43.503389 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:43.503424 kubelet[2857]: W1030 00:03:43.503415 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:43.503568 kubelet[2857]: E1030 00:03:43.503438 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:03:43.503699 kubelet[2857]: E1030 00:03:43.503676 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:43.503699 kubelet[2857]: W1030 00:03:43.503690 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:43.503699 kubelet[2857]: E1030 00:03:43.503701 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:03:43.503923 kubelet[2857]: E1030 00:03:43.503911 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:43.503923 kubelet[2857]: W1030 00:03:43.503920 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:43.504037 kubelet[2857]: E1030 00:03:43.503928 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:03:43.504171 kubelet[2857]: E1030 00:03:43.504159 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:43.504171 kubelet[2857]: W1030 00:03:43.504168 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:43.504272 kubelet[2857]: E1030 00:03:43.504180 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:03:43.504403 kubelet[2857]: E1030 00:03:43.504371 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:43.504403 kubelet[2857]: W1030 00:03:43.504395 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:43.504403 kubelet[2857]: E1030 00:03:43.504404 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:03:43.504661 kubelet[2857]: E1030 00:03:43.504642 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:43.504661 kubelet[2857]: W1030 00:03:43.504656 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:43.504784 kubelet[2857]: E1030 00:03:43.504667 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:03:43.504914 kubelet[2857]: E1030 00:03:43.504886 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:43.504914 kubelet[2857]: W1030 00:03:43.504895 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:43.504914 kubelet[2857]: E1030 00:03:43.504905 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:03:43.505115 kubelet[2857]: E1030 00:03:43.505088 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:43.505115 kubelet[2857]: W1030 00:03:43.505110 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:43.505225 kubelet[2857]: E1030 00:03:43.505120 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:03:43.505339 kubelet[2857]: E1030 00:03:43.505319 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:43.505339 kubelet[2857]: W1030 00:03:43.505333 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:43.505339 kubelet[2857]: E1030 00:03:43.505342 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:03:43.505699 kubelet[2857]: E1030 00:03:43.505675 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:43.505773 kubelet[2857]: W1030 00:03:43.505702 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:43.505773 kubelet[2857]: E1030 00:03:43.505715 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:03:43.505985 kubelet[2857]: E1030 00:03:43.505905 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:43.505985 kubelet[2857]: W1030 00:03:43.505924 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:43.505985 kubelet[2857]: E1030 00:03:43.505936 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:03:43.506139 kubelet[2857]: E1030 00:03:43.506121 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:43.506139 kubelet[2857]: W1030 00:03:43.506132 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:43.506219 kubelet[2857]: E1030 00:03:43.506142 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:03:43.506309 kubelet[2857]: E1030 00:03:43.506291 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:43.506309 kubelet[2857]: W1030 00:03:43.506302 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:43.506309 kubelet[2857]: E1030 00:03:43.506310 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:03:43.506492 kubelet[2857]: E1030 00:03:43.506475 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:43.506492 kubelet[2857]: W1030 00:03:43.506488 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:43.506554 kubelet[2857]: E1030 00:03:43.506497 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:03:43.506762 kubelet[2857]: E1030 00:03:43.506742 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:43.506762 kubelet[2857]: W1030 00:03:43.506756 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:43.506843 kubelet[2857]: E1030 00:03:43.506767 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:03:43.506995 kubelet[2857]: E1030 00:03:43.506970 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:43.506995 kubelet[2857]: W1030 00:03:43.506990 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:43.506995 kubelet[2857]: E1030 00:03:43.507000 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:03:43.507214 kubelet[2857]: E1030 00:03:43.507198 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:43.507214 kubelet[2857]: W1030 00:03:43.507209 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:43.507273 kubelet[2857]: E1030 00:03:43.507219 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:03:43.507484 kubelet[2857]: E1030 00:03:43.507466 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:43.507484 kubelet[2857]: W1030 00:03:43.507479 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:43.507542 kubelet[2857]: E1030 00:03:43.507489 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:03:43.507754 kubelet[2857]: E1030 00:03:43.507737 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:43.507754 kubelet[2857]: W1030 00:03:43.507750 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:43.507827 kubelet[2857]: E1030 00:03:43.507761 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:03:43.507980 kubelet[2857]: E1030 00:03:43.507962 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:43.507980 kubelet[2857]: W1030 00:03:43.507976 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:43.508046 kubelet[2857]: E1030 00:03:43.507987 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:03:43.508450 kubelet[2857]: E1030 00:03:43.508431 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:43.508450 kubelet[2857]: W1030 00:03:43.508446 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:43.508520 kubelet[2857]: E1030 00:03:43.508457 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:03:43.508780 kubelet[2857]: E1030 00:03:43.508761 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:43.508780 kubelet[2857]: W1030 00:03:43.508776 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:43.508858 kubelet[2857]: E1030 00:03:43.508789 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:03:43.509055 kubelet[2857]: E1030 00:03:43.509038 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:43.509055 kubelet[2857]: W1030 00:03:43.509053 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:43.509127 kubelet[2857]: E1030 00:03:43.509064 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:03:43.509406 kubelet[2857]: E1030 00:03:43.509367 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:43.509406 kubelet[2857]: W1030 00:03:43.509397 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:43.509490 kubelet[2857]: E1030 00:03:43.509409 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:03:43.509729 kubelet[2857]: E1030 00:03:43.509699 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:43.509729 kubelet[2857]: W1030 00:03:43.509726 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:43.509783 kubelet[2857]: E1030 00:03:43.509738 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:03:43.516924 kubelet[2857]: E1030 00:03:43.516837 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:43.516924 kubelet[2857]: W1030 00:03:43.516858 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:43.517171 kubelet[2857]: E1030 00:03:43.516986 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:03:43.538755 systemd[1]: Started cri-containerd-7ec8a0b8220920cd2d988d75af1b9c95873b29aede77a29a7e0b4bad3185047c.scope - libcontainer container 7ec8a0b8220920cd2d988d75af1b9c95873b29aede77a29a7e0b4bad3185047c. Oct 30 00:03:43.570080 containerd[1633]: time="2025-10-30T00:03:43.570018750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dxnbl,Uid:ec87d44a-fdcf-4560-be97-f463130e1d33,Namespace:calico-system,Attempt:0,} returns sandbox id \"7ec8a0b8220920cd2d988d75af1b9c95873b29aede77a29a7e0b4bad3185047c\"" Oct 30 00:03:43.570847 kubelet[2857]: E1030 00:03:43.570818 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:03:44.736583 kubelet[2857]: E1030 00:03:44.736504 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bmzt2" podUID="dac688c3-f50b-4d08-95db-f1aa2487f334" Oct 30 00:03:45.603363 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount811826210.mount: Deactivated 
successfully. Oct 30 00:03:46.738165 kubelet[2857]: E1030 00:03:46.737031 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bmzt2" podUID="dac688c3-f50b-4d08-95db-f1aa2487f334" Oct 30 00:03:47.699550 containerd[1633]: time="2025-10-30T00:03:47.699448267Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:03:47.702822 containerd[1633]: time="2025-10-30T00:03:47.702726985Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Oct 30 00:03:47.705946 containerd[1633]: time="2025-10-30T00:03:47.704846675Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:03:47.713007 containerd[1633]: time="2025-10-30T00:03:47.712913656Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:03:47.713922 containerd[1633]: time="2025-10-30T00:03:47.713867534Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 4.331014033s" Oct 30 00:03:47.713922 containerd[1633]: time="2025-10-30T00:03:47.713907401Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference 
\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Oct 30 00:03:47.717850 containerd[1633]: time="2025-10-30T00:03:47.717580763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Oct 30 00:03:47.733361 containerd[1633]: time="2025-10-30T00:03:47.733300605Z" level=info msg="CreateContainer within sandbox \"d4d71a0aca069507d09878751a2d2a29f84079b1440d327ba5ee7504141c84a2\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Oct 30 00:03:47.746221 containerd[1633]: time="2025-10-30T00:03:47.746145149Z" level=info msg="Container 73d8de461d259ff8f16d75a92740f3c39710f3dc4c775c5f8d4416b61a1a06b9: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:03:47.750719 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1236918818.mount: Deactivated successfully. Oct 30 00:03:47.758065 containerd[1633]: time="2025-10-30T00:03:47.757154110Z" level=info msg="CreateContainer within sandbox \"d4d71a0aca069507d09878751a2d2a29f84079b1440d327ba5ee7504141c84a2\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"73d8de461d259ff8f16d75a92740f3c39710f3dc4c775c5f8d4416b61a1a06b9\"" Oct 30 00:03:47.758404 containerd[1633]: time="2025-10-30T00:03:47.758297989Z" level=info msg="StartContainer for \"73d8de461d259ff8f16d75a92740f3c39710f3dc4c775c5f8d4416b61a1a06b9\"" Oct 30 00:03:47.759735 containerd[1633]: time="2025-10-30T00:03:47.759697830Z" level=info msg="connecting to shim 73d8de461d259ff8f16d75a92740f3c39710f3dc4c775c5f8d4416b61a1a06b9" address="unix:///run/containerd/s/76afe0d6c6a9a0ed04d87d9865597a4dc829c71a14e0a75b81cce93a3c8e7082" protocol=ttrpc version=3 Oct 30 00:03:47.795084 systemd[1]: Started cri-containerd-73d8de461d259ff8f16d75a92740f3c39710f3dc4c775c5f8d4416b61a1a06b9.scope - libcontainer container 73d8de461d259ff8f16d75a92740f3c39710f3dc4c775c5f8d4416b61a1a06b9. 
Oct 30 00:03:47.855825 containerd[1633]: time="2025-10-30T00:03:47.855758765Z" level=info msg="StartContainer for \"73d8de461d259ff8f16d75a92740f3c39710f3dc4c775c5f8d4416b61a1a06b9\" returns successfully" Oct 30 00:03:48.736324 kubelet[2857]: E1030 00:03:48.736254 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bmzt2" podUID="dac688c3-f50b-4d08-95db-f1aa2487f334" Oct 30 00:03:48.838908 kubelet[2857]: E1030 00:03:48.838394 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:03:48.854324 kubelet[2857]: I1030 00:03:48.854070 2857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5c6bbf44c5-zxk29" podStartSLOduration=2.518606595 podStartE2EDuration="6.854045459s" podCreationTimestamp="2025-10-30 00:03:42 +0000 UTC" firstStartedPulling="2025-10-30 00:03:43.380887793 +0000 UTC m=+23.737397560" lastFinishedPulling="2025-10-30 00:03:47.716326657 +0000 UTC m=+28.072836424" observedRunningTime="2025-10-30 00:03:48.853532616 +0000 UTC m=+29.210042413" watchObservedRunningTime="2025-10-30 00:03:48.854045459 +0000 UTC m=+29.210555226" Oct 30 00:03:48.926963 kubelet[2857]: E1030 00:03:48.926900 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:48.926963 kubelet[2857]: W1030 00:03:48.926935 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:48.926963 kubelet[2857]: E1030 00:03:48.926962 2857 plugins.go:697] "Error dynamically probing 
plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:03:48.927543 kubelet[2857]: E1030 00:03:48.927495 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:48.927543 kubelet[2857]: W1030 00:03:48.927514 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:48.927543 kubelet[2857]: E1030 00:03:48.927526 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:03:48.927914 kubelet[2857]: E1030 00:03:48.927891 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:48.927914 kubelet[2857]: W1030 00:03:48.927907 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:48.927914 kubelet[2857]: E1030 00:03:48.927918 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:03:48.928274 kubelet[2857]: E1030 00:03:48.928239 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:48.928274 kubelet[2857]: W1030 00:03:48.928256 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:48.928274 kubelet[2857]: E1030 00:03:48.928270 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:03:48.928662 kubelet[2857]: E1030 00:03:48.928598 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:48.928748 kubelet[2857]: W1030 00:03:48.928661 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:48.928748 kubelet[2857]: E1030 00:03:48.928710 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:03:48.929075 kubelet[2857]: E1030 00:03:48.929043 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:48.929075 kubelet[2857]: W1030 00:03:48.929059 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:48.929075 kubelet[2857]: E1030 00:03:48.929070 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:03:48.929336 kubelet[2857]: E1030 00:03:48.929318 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:48.929336 kubelet[2857]: W1030 00:03:48.929331 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:48.929439 kubelet[2857]: E1030 00:03:48.929341 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:03:48.929611 kubelet[2857]: E1030 00:03:48.929567 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:48.929611 kubelet[2857]: W1030 00:03:48.929592 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:48.929727 kubelet[2857]: E1030 00:03:48.929646 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:03:48.929974 kubelet[2857]: E1030 00:03:48.929949 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:48.929974 kubelet[2857]: W1030 00:03:48.929966 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:48.929974 kubelet[2857]: E1030 00:03:48.929977 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:03:48.931906 kubelet[2857]: E1030 00:03:48.931824 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:48.931906 kubelet[2857]: W1030 00:03:48.931848 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:48.931906 kubelet[2857]: E1030 00:03:48.931872 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:03:48.932139 kubelet[2857]: E1030 00:03:48.932120 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:48.932139 kubelet[2857]: W1030 00:03:48.932135 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:48.932239 kubelet[2857]: E1030 00:03:48.932147 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:03:48.933089 kubelet[2857]: E1030 00:03:48.933068 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:48.933089 kubelet[2857]: W1030 00:03:48.933083 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:48.933206 kubelet[2857]: E1030 00:03:48.933096 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:03:48.933633 kubelet[2857]: E1030 00:03:48.933587 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:48.933633 kubelet[2857]: W1030 00:03:48.933619 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:48.933633 kubelet[2857]: E1030 00:03:48.933633 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:03:48.933911 kubelet[2857]: E1030 00:03:48.933889 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:48.933911 kubelet[2857]: W1030 00:03:48.933902 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:48.933911 kubelet[2857]: E1030 00:03:48.933913 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:03:48.934279 kubelet[2857]: E1030 00:03:48.934262 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:48.934279 kubelet[2857]: W1030 00:03:48.934275 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:48.934355 kubelet[2857]: E1030 00:03:48.934287 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:03:48.944933 kubelet[2857]: E1030 00:03:48.944875 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:48.944933 kubelet[2857]: W1030 00:03:48.944904 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:48.944933 kubelet[2857]: E1030 00:03:48.944932 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:03:48.946108 kubelet[2857]: E1030 00:03:48.945769 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:48.946108 kubelet[2857]: W1030 00:03:48.945792 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:48.946108 kubelet[2857]: E1030 00:03:48.945804 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:03:48.946397 kubelet[2857]: E1030 00:03:48.946165 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:48.946397 kubelet[2857]: W1030 00:03:48.946177 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:48.946397 kubelet[2857]: E1030 00:03:48.946188 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:03:48.946504 kubelet[2857]: E1030 00:03:48.946481 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:48.946504 kubelet[2857]: W1030 00:03:48.946499 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:48.946575 kubelet[2857]: E1030 00:03:48.946513 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:03:48.946911 kubelet[2857]: E1030 00:03:48.946802 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:48.946911 kubelet[2857]: W1030 00:03:48.946822 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:48.946911 kubelet[2857]: E1030 00:03:48.946835 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:03:48.947056 kubelet[2857]: E1030 00:03:48.947036 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:48.947056 kubelet[2857]: W1030 00:03:48.947048 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:48.947132 kubelet[2857]: E1030 00:03:48.947058 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:03:48.947838 kubelet[2857]: E1030 00:03:48.947779 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:48.947838 kubelet[2857]: W1030 00:03:48.947817 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:48.947981 kubelet[2857]: E1030 00:03:48.947849 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:03:48.948288 kubelet[2857]: E1030 00:03:48.948264 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:48.948337 kubelet[2857]: W1030 00:03:48.948291 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:48.948337 kubelet[2857]: E1030 00:03:48.948305 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:03:48.948655 kubelet[2857]: E1030 00:03:48.948584 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:48.948655 kubelet[2857]: W1030 00:03:48.948645 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:48.948655 kubelet[2857]: E1030 00:03:48.948671 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:03:48.949200 kubelet[2857]: E1030 00:03:48.949180 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:48.949200 kubelet[2857]: W1030 00:03:48.949195 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:48.949277 kubelet[2857]: E1030 00:03:48.949209 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:03:48.949510 kubelet[2857]: E1030 00:03:48.949453 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:48.949510 kubelet[2857]: W1030 00:03:48.949468 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:48.949510 kubelet[2857]: E1030 00:03:48.949481 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:03:48.949825 kubelet[2857]: E1030 00:03:48.949804 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:48.949825 kubelet[2857]: W1030 00:03:48.949822 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:48.949959 kubelet[2857]: E1030 00:03:48.949843 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:03:48.950145 kubelet[2857]: E1030 00:03:48.950124 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:48.950145 kubelet[2857]: W1030 00:03:48.950137 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:48.950237 kubelet[2857]: E1030 00:03:48.950150 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:03:48.950389 kubelet[2857]: E1030 00:03:48.950369 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:48.950389 kubelet[2857]: W1030 00:03:48.950381 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:48.950472 kubelet[2857]: E1030 00:03:48.950393 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:03:48.950717 kubelet[2857]: E1030 00:03:48.950699 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:48.950717 kubelet[2857]: W1030 00:03:48.950714 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:48.950807 kubelet[2857]: E1030 00:03:48.950726 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:03:48.950964 kubelet[2857]: E1030 00:03:48.950949 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:48.950964 kubelet[2857]: W1030 00:03:48.950960 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:48.951037 kubelet[2857]: E1030 00:03:48.950971 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:03:48.951350 kubelet[2857]: E1030 00:03:48.951324 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:48.951350 kubelet[2857]: W1030 00:03:48.951344 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:48.951430 kubelet[2857]: E1030 00:03:48.951362 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Oct 30 00:03:48.951700 kubelet[2857]: E1030 00:03:48.951669 2857 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Oct 30 00:03:48.951700 kubelet[2857]: W1030 00:03:48.951684 2857 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Oct 30 00:03:48.951813 kubelet[2857]: E1030 00:03:48.951707 2857 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Oct 30 00:03:49.530070 containerd[1633]: time="2025-10-30T00:03:49.529962553Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:03:49.531119 containerd[1633]: time="2025-10-30T00:03:49.531066218Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Oct 30 00:03:49.532490 containerd[1633]: time="2025-10-30T00:03:49.532423971Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:03:49.535246 containerd[1633]: time="2025-10-30T00:03:49.535203299Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:03:49.536271 containerd[1633]: time="2025-10-30T00:03:49.536237500Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.818397169s" Oct 30 00:03:49.536316 containerd[1633]: time="2025-10-30T00:03:49.536273800Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Oct 30 00:03:49.540818 containerd[1633]: time="2025-10-30T00:03:49.540751056Z" level=info msg="CreateContainer within sandbox \"7ec8a0b8220920cd2d988d75af1b9c95873b29aede77a29a7e0b4bad3185047c\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Oct 30 00:03:49.554617 containerd[1633]: time="2025-10-30T00:03:49.554535320Z" level=info msg="Container 02cd099f118f93cbe6afcfd2d6ac8b75b860373891b9b511eb7c204a5cb36a27: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:03:49.568943 containerd[1633]: time="2025-10-30T00:03:49.568869228Z" level=info msg="CreateContainer within sandbox \"7ec8a0b8220920cd2d988d75af1b9c95873b29aede77a29a7e0b4bad3185047c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"02cd099f118f93cbe6afcfd2d6ac8b75b860373891b9b511eb7c204a5cb36a27\"" Oct 30 00:03:49.569633 containerd[1633]: time="2025-10-30T00:03:49.569572251Z" level=info msg="StartContainer for \"02cd099f118f93cbe6afcfd2d6ac8b75b860373891b9b511eb7c204a5cb36a27\"" Oct 30 00:03:49.571197 containerd[1633]: time="2025-10-30T00:03:49.571168239Z" level=info msg="connecting to shim 02cd099f118f93cbe6afcfd2d6ac8b75b860373891b9b511eb7c204a5cb36a27" address="unix:///run/containerd/s/e2bfd54f0cb0271eea66752af763f1c49c18e33837a5b9c572432850b5348a0e" protocol=ttrpc version=3 Oct 30 00:03:49.610035 systemd[1]: Started cri-containerd-02cd099f118f93cbe6afcfd2d6ac8b75b860373891b9b511eb7c204a5cb36a27.scope - libcontainer container 02cd099f118f93cbe6afcfd2d6ac8b75b860373891b9b511eb7c204a5cb36a27. Oct 30 00:03:49.681948 systemd[1]: cri-containerd-02cd099f118f93cbe6afcfd2d6ac8b75b860373891b9b511eb7c204a5cb36a27.scope: Deactivated successfully. 
Oct 30 00:03:49.685332 containerd[1633]: time="2025-10-30T00:03:49.685273340Z" level=info msg="TaskExit event in podsandbox handler container_id:\"02cd099f118f93cbe6afcfd2d6ac8b75b860373891b9b511eb7c204a5cb36a27\" id:\"02cd099f118f93cbe6afcfd2d6ac8b75b860373891b9b511eb7c204a5cb36a27\" pid:3537 exited_at:{seconds:1761782629 nanos:684619914}" Oct 30 00:03:49.742958 containerd[1633]: time="2025-10-30T00:03:49.742891105Z" level=info msg="received exit event container_id:\"02cd099f118f93cbe6afcfd2d6ac8b75b860373891b9b511eb7c204a5cb36a27\" id:\"02cd099f118f93cbe6afcfd2d6ac8b75b860373891b9b511eb7c204a5cb36a27\" pid:3537 exited_at:{seconds:1761782629 nanos:684619914}" Oct 30 00:03:49.745301 containerd[1633]: time="2025-10-30T00:03:49.745266143Z" level=info msg="StartContainer for \"02cd099f118f93cbe6afcfd2d6ac8b75b860373891b9b511eb7c204a5cb36a27\" returns successfully" Oct 30 00:03:49.770755 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-02cd099f118f93cbe6afcfd2d6ac8b75b860373891b9b511eb7c204a5cb36a27-rootfs.mount: Deactivated successfully. 
Oct 30 00:03:49.842836 kubelet[2857]: I1030 00:03:49.842671 2857 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 30 00:03:49.843385 kubelet[2857]: E1030 00:03:49.842974 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:03:49.843385 kubelet[2857]: E1030 00:03:49.842974 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:03:50.736832 kubelet[2857]: E1030 00:03:50.736759 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bmzt2" podUID="dac688c3-f50b-4d08-95db-f1aa2487f334" Oct 30 00:03:50.849524 kubelet[2857]: E1030 00:03:50.849131 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:03:50.850198 containerd[1633]: time="2025-10-30T00:03:50.849917270Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Oct 30 00:03:51.247464 kubelet[2857]: I1030 00:03:51.247409 2857 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 30 00:03:51.248073 kubelet[2857]: E1030 00:03:51.248022 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:03:51.850966 kubelet[2857]: E1030 00:03:51.850899 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 
00:03:52.736428 kubelet[2857]: E1030 00:03:52.736045 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bmzt2" podUID="dac688c3-f50b-4d08-95db-f1aa2487f334" Oct 30 00:03:54.736177 kubelet[2857]: E1030 00:03:54.736089 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bmzt2" podUID="dac688c3-f50b-4d08-95db-f1aa2487f334" Oct 30 00:03:54.773814 containerd[1633]: time="2025-10-30T00:03:54.773733975Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:03:54.918840 containerd[1633]: time="2025-10-30T00:03:54.918738965Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Oct 30 00:03:55.000793 containerd[1633]: time="2025-10-30T00:03:55.000715323Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:03:55.039363 containerd[1633]: time="2025-10-30T00:03:55.039282410Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:03:55.040374 containerd[1633]: time="2025-10-30T00:03:55.040295287Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.190336897s" Oct 30 00:03:55.040374 containerd[1633]: time="2025-10-30T00:03:55.040364912Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Oct 30 00:03:55.171990 containerd[1633]: time="2025-10-30T00:03:55.171906343Z" level=info msg="CreateContainer within sandbox \"7ec8a0b8220920cd2d988d75af1b9c95873b29aede77a29a7e0b4bad3185047c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Oct 30 00:03:55.409270 containerd[1633]: time="2025-10-30T00:03:55.409121481Z" level=info msg="Container cec2ac09389259a48200fe2e2db53c466ad6e0fe5e34c8d004b49c332c267e81: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:03:55.679544 containerd[1633]: time="2025-10-30T00:03:55.679360280Z" level=info msg="CreateContainer within sandbox \"7ec8a0b8220920cd2d988d75af1b9c95873b29aede77a29a7e0b4bad3185047c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"cec2ac09389259a48200fe2e2db53c466ad6e0fe5e34c8d004b49c332c267e81\"" Oct 30 00:03:55.680122 containerd[1633]: time="2025-10-30T00:03:55.680052655Z" level=info msg="StartContainer for \"cec2ac09389259a48200fe2e2db53c466ad6e0fe5e34c8d004b49c332c267e81\"" Oct 30 00:03:55.682248 containerd[1633]: time="2025-10-30T00:03:55.682212219Z" level=info msg="connecting to shim cec2ac09389259a48200fe2e2db53c466ad6e0fe5e34c8d004b49c332c267e81" address="unix:///run/containerd/s/e2bfd54f0cb0271eea66752af763f1c49c18e33837a5b9c572432850b5348a0e" protocol=ttrpc version=3 Oct 30 00:03:55.709733 systemd[1]: Started cri-containerd-cec2ac09389259a48200fe2e2db53c466ad6e0fe5e34c8d004b49c332c267e81.scope - libcontainer container cec2ac09389259a48200fe2e2db53c466ad6e0fe5e34c8d004b49c332c267e81. 
Oct 30 00:03:56.730833 containerd[1633]: time="2025-10-30T00:03:56.730750802Z" level=info msg="StartContainer for \"cec2ac09389259a48200fe2e2db53c466ad6e0fe5e34c8d004b49c332c267e81\" returns successfully" Oct 30 00:03:56.736970 kubelet[2857]: E1030 00:03:56.735982 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bmzt2" podUID="dac688c3-f50b-4d08-95db-f1aa2487f334" Oct 30 00:03:57.737619 kubelet[2857]: E1030 00:03:57.737548 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:03:58.671884 containerd[1633]: time="2025-10-30T00:03:58.671728563Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 30 00:03:58.675144 systemd[1]: cri-containerd-cec2ac09389259a48200fe2e2db53c466ad6e0fe5e34c8d004b49c332c267e81.scope: Deactivated successfully. Oct 30 00:03:58.675587 systemd[1]: cri-containerd-cec2ac09389259a48200fe2e2db53c466ad6e0fe5e34c8d004b49c332c267e81.scope: Consumed 682ms CPU time, 181.7M memory peak, 12K read from disk, 171.3M written to disk. 
Oct 30 00:03:58.677371 containerd[1633]: time="2025-10-30T00:03:58.677322830Z" level=info msg="received exit event container_id:\"cec2ac09389259a48200fe2e2db53c466ad6e0fe5e34c8d004b49c332c267e81\" id:\"cec2ac09389259a48200fe2e2db53c466ad6e0fe5e34c8d004b49c332c267e81\" pid:3603 exited_at:{seconds:1761782638 nanos:676996859}" Oct 30 00:03:58.677488 containerd[1633]: time="2025-10-30T00:03:58.677462830Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cec2ac09389259a48200fe2e2db53c466ad6e0fe5e34c8d004b49c332c267e81\" id:\"cec2ac09389259a48200fe2e2db53c466ad6e0fe5e34c8d004b49c332c267e81\" pid:3603 exited_at:{seconds:1761782638 nanos:676996859}" Oct 30 00:03:58.702654 kubelet[2857]: I1030 00:03:58.701953 2857 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Oct 30 00:03:58.709484 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cec2ac09389259a48200fe2e2db53c466ad6e0fe5e34c8d004b49c332c267e81-rootfs.mount: Deactivated successfully. Oct 30 00:03:58.742765 kubelet[2857]: E1030 00:03:58.742712 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:03:58.745131 systemd[1]: Created slice kubepods-besteffort-poddac688c3_f50b_4d08_95db_f1aa2487f334.slice - libcontainer container kubepods-besteffort-poddac688c3_f50b_4d08_95db_f1aa2487f334.slice. Oct 30 00:03:59.442140 containerd[1633]: time="2025-10-30T00:03:59.442057199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bmzt2,Uid:dac688c3-f50b-4d08-95db-f1aa2487f334,Namespace:calico-system,Attempt:0,}" Oct 30 00:03:59.463076 systemd[1]: Created slice kubepods-besteffort-pod40ebe1bf_5ac2_4543_a7a3_8ade43c308c1.slice - libcontainer container kubepods-besteffort-pod40ebe1bf_5ac2_4543_a7a3_8ade43c308c1.slice. 
Oct 30 00:03:59.476091 systemd[1]: Created slice kubepods-burstable-pod54eb03ba_2fa6_4d86_891f_3330d4c9a1a2.slice - libcontainer container kubepods-burstable-pod54eb03ba_2fa6_4d86_891f_3330d4c9a1a2.slice. Oct 30 00:03:59.488937 systemd[1]: Created slice kubepods-besteffort-podf4d9bc01_6958_4087_977b_6989585a84eb.slice - libcontainer container kubepods-besteffort-podf4d9bc01_6958_4087_977b_6989585a84eb.slice. Oct 30 00:03:59.496880 systemd[1]: Created slice kubepods-besteffort-pod676e7fcb_c57d_4b5d_87fb_71a75d798467.slice - libcontainer container kubepods-besteffort-pod676e7fcb_c57d_4b5d_87fb_71a75d798467.slice. Oct 30 00:03:59.505475 systemd[1]: Created slice kubepods-besteffort-podc9ccbdc0_7a3b_420c_9200_91bd3b896e9d.slice - libcontainer container kubepods-besteffort-podc9ccbdc0_7a3b_420c_9200_91bd3b896e9d.slice. Oct 30 00:03:59.513007 systemd[1]: Created slice kubepods-besteffort-pod76c98e53_eb5b_4690_b648_f39ba68c3761.slice - libcontainer container kubepods-besteffort-pod76c98e53_eb5b_4690_b648_f39ba68c3761.slice. 
Oct 30 00:03:59.520078 kubelet[2857]: I1030 00:03:59.520039 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/676e7fcb-c57d-4b5d-87fb-71a75d798467-calico-apiserver-certs\") pod \"calico-apiserver-54f47b4cdd-ltgss\" (UID: \"676e7fcb-c57d-4b5d-87fb-71a75d798467\") " pod="calico-apiserver/calico-apiserver-54f47b4cdd-ltgss" Oct 30 00:03:59.520078 kubelet[2857]: I1030 00:03:59.520082 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5tbc\" (UniqueName: \"kubernetes.io/projected/676e7fcb-c57d-4b5d-87fb-71a75d798467-kube-api-access-l5tbc\") pod \"calico-apiserver-54f47b4cdd-ltgss\" (UID: \"676e7fcb-c57d-4b5d-87fb-71a75d798467\") " pod="calico-apiserver/calico-apiserver-54f47b4cdd-ltgss" Oct 30 00:03:59.520078 kubelet[2857]: I1030 00:03:59.520106 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/40ebe1bf-5ac2-4543-a7a3-8ade43c308c1-whisker-backend-key-pair\") pod \"whisker-67d895cdb6-zhrr4\" (UID: \"40ebe1bf-5ac2-4543-a7a3-8ade43c308c1\") " pod="calico-system/whisker-67d895cdb6-zhrr4" Oct 30 00:03:59.520078 kubelet[2857]: I1030 00:03:59.520143 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f4d9bc01-6958-4087-977b-6989585a84eb-calico-apiserver-certs\") pod \"calico-apiserver-54f47b4cdd-94wcm\" (UID: \"f4d9bc01-6958-4087-977b-6989585a84eb\") " pod="calico-apiserver/calico-apiserver-54f47b4cdd-94wcm" Oct 30 00:03:59.520078 kubelet[2857]: I1030 00:03:59.520166 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/76c98e53-eb5b-4690-b648-f39ba68c3761-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-7dwtp\" (UID: \"76c98e53-eb5b-4690-b648-f39ba68c3761\") " pod="calico-system/goldmane-7c778bb748-7dwtp" Oct 30 00:03:59.520817 kubelet[2857]: I1030 00:03:59.520188 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tl6r\" (UniqueName: \"kubernetes.io/projected/54eb03ba-2fa6-4d86-891f-3330d4c9a1a2-kube-api-access-4tl6r\") pod \"coredns-66bc5c9577-666sp\" (UID: \"54eb03ba-2fa6-4d86-891f-3330d4c9a1a2\") " pod="kube-system/coredns-66bc5c9577-666sp" Oct 30 00:03:59.520817 kubelet[2857]: I1030 00:03:59.520207 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9ccbdc0-7a3b-420c-9200-91bd3b896e9d-tigera-ca-bundle\") pod \"calico-kube-controllers-778756694d-5pt4t\" (UID: \"c9ccbdc0-7a3b-420c-9200-91bd3b896e9d\") " pod="calico-system/calico-kube-controllers-778756694d-5pt4t" Oct 30 00:03:59.520817 kubelet[2857]: I1030 00:03:59.520243 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76c98e53-eb5b-4690-b648-f39ba68c3761-config\") pod \"goldmane-7c778bb748-7dwtp\" (UID: \"76c98e53-eb5b-4690-b648-f39ba68c3761\") " pod="calico-system/goldmane-7c778bb748-7dwtp" Oct 30 00:03:59.520817 kubelet[2857]: I1030 00:03:59.520264 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpdsp\" (UniqueName: \"kubernetes.io/projected/c9ccbdc0-7a3b-420c-9200-91bd3b896e9d-kube-api-access-wpdsp\") pod \"calico-kube-controllers-778756694d-5pt4t\" (UID: \"c9ccbdc0-7a3b-420c-9200-91bd3b896e9d\") " pod="calico-system/calico-kube-controllers-778756694d-5pt4t" Oct 30 00:03:59.520817 kubelet[2857]: I1030 00:03:59.520288 2857 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40ebe1bf-5ac2-4543-a7a3-8ade43c308c1-whisker-ca-bundle\") pod \"whisker-67d895cdb6-zhrr4\" (UID: \"40ebe1bf-5ac2-4543-a7a3-8ade43c308c1\") " pod="calico-system/whisker-67d895cdb6-zhrr4" Oct 30 00:03:59.520968 kubelet[2857]: I1030 00:03:59.520306 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d8696ee-1cc2-47c7-9396-66f6dddcfb7a-config-volume\") pod \"coredns-66bc5c9577-kmqpz\" (UID: \"6d8696ee-1cc2-47c7-9396-66f6dddcfb7a\") " pod="kube-system/coredns-66bc5c9577-kmqpz" Oct 30 00:03:59.520968 kubelet[2857]: I1030 00:03:59.520328 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bjgw\" (UniqueName: \"kubernetes.io/projected/40ebe1bf-5ac2-4543-a7a3-8ade43c308c1-kube-api-access-4bjgw\") pod \"whisker-67d895cdb6-zhrr4\" (UID: \"40ebe1bf-5ac2-4543-a7a3-8ade43c308c1\") " pod="calico-system/whisker-67d895cdb6-zhrr4" Oct 30 00:03:59.520968 kubelet[2857]: I1030 00:03:59.520351 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjmk4\" (UniqueName: \"kubernetes.io/projected/f4d9bc01-6958-4087-977b-6989585a84eb-kube-api-access-hjmk4\") pod \"calico-apiserver-54f47b4cdd-94wcm\" (UID: \"f4d9bc01-6958-4087-977b-6989585a84eb\") " pod="calico-apiserver/calico-apiserver-54f47b4cdd-94wcm" Oct 30 00:03:59.520968 kubelet[2857]: I1030 00:03:59.520370 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/76c98e53-eb5b-4690-b648-f39ba68c3761-goldmane-key-pair\") pod \"goldmane-7c778bb748-7dwtp\" (UID: \"76c98e53-eb5b-4690-b648-f39ba68c3761\") " pod="calico-system/goldmane-7c778bb748-7dwtp" Oct 30 00:03:59.520968 
kubelet[2857]: I1030 00:03:59.520392 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djr95\" (UniqueName: \"kubernetes.io/projected/76c98e53-eb5b-4690-b648-f39ba68c3761-kube-api-access-djr95\") pod \"goldmane-7c778bb748-7dwtp\" (UID: \"76c98e53-eb5b-4690-b648-f39ba68c3761\") " pod="calico-system/goldmane-7c778bb748-7dwtp" Oct 30 00:03:59.521134 kubelet[2857]: I1030 00:03:59.520421 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhtsl\" (UniqueName: \"kubernetes.io/projected/6d8696ee-1cc2-47c7-9396-66f6dddcfb7a-kube-api-access-bhtsl\") pod \"coredns-66bc5c9577-kmqpz\" (UID: \"6d8696ee-1cc2-47c7-9396-66f6dddcfb7a\") " pod="kube-system/coredns-66bc5c9577-kmqpz" Oct 30 00:03:59.521134 kubelet[2857]: I1030 00:03:59.520441 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/54eb03ba-2fa6-4d86-891f-3330d4c9a1a2-config-volume\") pod \"coredns-66bc5c9577-666sp\" (UID: \"54eb03ba-2fa6-4d86-891f-3330d4c9a1a2\") " pod="kube-system/coredns-66bc5c9577-666sp" Oct 30 00:03:59.524389 systemd[1]: Created slice kubepods-burstable-pod6d8696ee_1cc2_47c7_9396_66f6dddcfb7a.slice - libcontainer container kubepods-burstable-pod6d8696ee_1cc2_47c7_9396_66f6dddcfb7a.slice. 
Oct 30 00:03:59.561240 containerd[1633]: time="2025-10-30T00:03:59.561166752Z" level=error msg="Failed to destroy network for sandbox \"ed954690e75cd86d607e05805439ae761b4e21edf08add87ddb5eaf9cbe9e0fb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:03:59.563321 containerd[1633]: time="2025-10-30T00:03:59.563275804Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bmzt2,Uid:dac688c3-f50b-4d08-95db-f1aa2487f334,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed954690e75cd86d607e05805439ae761b4e21edf08add87ddb5eaf9cbe9e0fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:03:59.563732 kubelet[2857]: E1030 00:03:59.563658 2857 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed954690e75cd86d607e05805439ae761b4e21edf08add87ddb5eaf9cbe9e0fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:03:59.563817 kubelet[2857]: E1030 00:03:59.563761 2857 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed954690e75cd86d607e05805439ae761b4e21edf08add87ddb5eaf9cbe9e0fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bmzt2" Oct 30 00:03:59.563817 kubelet[2857]: E1030 00:03:59.563785 2857 kuberuntime_manager.go:1343] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed954690e75cd86d607e05805439ae761b4e21edf08add87ddb5eaf9cbe9e0fb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bmzt2" Oct 30 00:03:59.563892 kubelet[2857]: E1030 00:03:59.563859 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bmzt2_calico-system(dac688c3-f50b-4d08-95db-f1aa2487f334)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bmzt2_calico-system(dac688c3-f50b-4d08-95db-f1aa2487f334)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ed954690e75cd86d607e05805439ae761b4e21edf08add87ddb5eaf9cbe9e0fb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bmzt2" podUID="dac688c3-f50b-4d08-95db-f1aa2487f334" Oct 30 00:03:59.564089 systemd[1]: run-netns-cni\x2d68a7a4bd\x2db28c\x2d265b\x2db3c8\x2da4a40e70bc49.mount: Deactivated successfully. 
Oct 30 00:03:59.748784 kubelet[2857]: E1030 00:03:59.748729 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:03:59.749742 containerd[1633]: time="2025-10-30T00:03:59.749702654Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Oct 30 00:03:59.776638 containerd[1633]: time="2025-10-30T00:03:59.776555278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-67d895cdb6-zhrr4,Uid:40ebe1bf-5ac2-4543-a7a3-8ade43c308c1,Namespace:calico-system,Attempt:0,}" Oct 30 00:03:59.783879 kubelet[2857]: E1030 00:03:59.783833 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:03:59.786162 containerd[1633]: time="2025-10-30T00:03:59.786101175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-666sp,Uid:54eb03ba-2fa6-4d86-891f-3330d4c9a1a2,Namespace:kube-system,Attempt:0,}" Oct 30 00:03:59.796235 containerd[1633]: time="2025-10-30T00:03:59.796177769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54f47b4cdd-94wcm,Uid:f4d9bc01-6958-4087-977b-6989585a84eb,Namespace:calico-apiserver,Attempt:0,}" Oct 30 00:03:59.812336 containerd[1633]: time="2025-10-30T00:03:59.812249376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54f47b4cdd-ltgss,Uid:676e7fcb-c57d-4b5d-87fb-71a75d798467,Namespace:calico-apiserver,Attempt:0,}" Oct 30 00:03:59.815488 containerd[1633]: time="2025-10-30T00:03:59.815439470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-778756694d-5pt4t,Uid:c9ccbdc0-7a3b-420c-9200-91bd3b896e9d,Namespace:calico-system,Attempt:0,}" Oct 30 00:03:59.822947 containerd[1633]: time="2025-10-30T00:03:59.822844193Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-7c778bb748-7dwtp,Uid:76c98e53-eb5b-4690-b648-f39ba68c3761,Namespace:calico-system,Attempt:0,}" Oct 30 00:03:59.836025 kubelet[2857]: E1030 00:03:59.835969 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:03:59.846997 containerd[1633]: time="2025-10-30T00:03:59.846655751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-kmqpz,Uid:6d8696ee-1cc2-47c7-9396-66f6dddcfb7a,Namespace:kube-system,Attempt:0,}" Oct 30 00:03:59.919189 containerd[1633]: time="2025-10-30T00:03:59.919119789Z" level=error msg="Failed to destroy network for sandbox \"701c9ac7b0a936c89df79355b0a3d1cd0be29ef3485a4e1f3ebbd940ac6d4d90\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:03:59.920908 containerd[1633]: time="2025-10-30T00:03:59.920837174Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-666sp,Uid:54eb03ba-2fa6-4d86-891f-3330d4c9a1a2,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"701c9ac7b0a936c89df79355b0a3d1cd0be29ef3485a4e1f3ebbd940ac6d4d90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:03:59.921212 kubelet[2857]: E1030 00:03:59.921164 2857 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"701c9ac7b0a936c89df79355b0a3d1cd0be29ef3485a4e1f3ebbd940ac6d4d90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Oct 30 00:03:59.921284 kubelet[2857]: E1030 00:03:59.921244 2857 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"701c9ac7b0a936c89df79355b0a3d1cd0be29ef3485a4e1f3ebbd940ac6d4d90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-666sp" Oct 30 00:03:59.921323 kubelet[2857]: E1030 00:03:59.921283 2857 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"701c9ac7b0a936c89df79355b0a3d1cd0be29ef3485a4e1f3ebbd940ac6d4d90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-666sp" Oct 30 00:03:59.921386 kubelet[2857]: E1030 00:03:59.921342 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-666sp_kube-system(54eb03ba-2fa6-4d86-891f-3330d4c9a1a2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-666sp_kube-system(54eb03ba-2fa6-4d86-891f-3330d4c9a1a2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"701c9ac7b0a936c89df79355b0a3d1cd0be29ef3485a4e1f3ebbd940ac6d4d90\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-666sp" podUID="54eb03ba-2fa6-4d86-891f-3330d4c9a1a2" Oct 30 00:03:59.936396 containerd[1633]: time="2025-10-30T00:03:59.936250877Z" level=error msg="Failed to destroy network for sandbox 
\"804e5a1960685522b7d004d33a724a7e0487dc91bc5935ce51d3186db1b9e8ca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:03:59.938124 containerd[1633]: time="2025-10-30T00:03:59.938092051Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-67d895cdb6-zhrr4,Uid:40ebe1bf-5ac2-4543-a7a3-8ade43c308c1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"804e5a1960685522b7d004d33a724a7e0487dc91bc5935ce51d3186db1b9e8ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:03:59.938545 kubelet[2857]: E1030 00:03:59.938485 2857 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"804e5a1960685522b7d004d33a724a7e0487dc91bc5935ce51d3186db1b9e8ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:03:59.938693 kubelet[2857]: E1030 00:03:59.938559 2857 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"804e5a1960685522b7d004d33a724a7e0487dc91bc5935ce51d3186db1b9e8ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-67d895cdb6-zhrr4" Oct 30 00:03:59.938761 kubelet[2857]: E1030 00:03:59.938704 2857 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"804e5a1960685522b7d004d33a724a7e0487dc91bc5935ce51d3186db1b9e8ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-67d895cdb6-zhrr4" Oct 30 00:03:59.939697 kubelet[2857]: E1030 00:03:59.938797 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-67d895cdb6-zhrr4_calico-system(40ebe1bf-5ac2-4543-a7a3-8ade43c308c1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-67d895cdb6-zhrr4_calico-system(40ebe1bf-5ac2-4543-a7a3-8ade43c308c1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"804e5a1960685522b7d004d33a724a7e0487dc91bc5935ce51d3186db1b9e8ca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-67d895cdb6-zhrr4" podUID="40ebe1bf-5ac2-4543-a7a3-8ade43c308c1" Oct 30 00:03:59.946988 containerd[1633]: time="2025-10-30T00:03:59.946842788Z" level=error msg="Failed to destroy network for sandbox \"9ae7460399b84533dced047190ccb1ee0ebaf41754745bb62922ea28d1d8771c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:03:59.948080 containerd[1633]: time="2025-10-30T00:03:59.948001832Z" level=error msg="Failed to destroy network for sandbox \"b14fb9f4e7ea01983ceed45cf63f670c1f227ff4d1aecf52a074aa197bd5ed3f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:03:59.949515 containerd[1633]: time="2025-10-30T00:03:59.949375050Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-54f47b4cdd-94wcm,Uid:f4d9bc01-6958-4087-977b-6989585a84eb,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ae7460399b84533dced047190ccb1ee0ebaf41754745bb62922ea28d1d8771c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:03:59.950225 kubelet[2857]: E1030 00:03:59.949803 2857 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ae7460399b84533dced047190ccb1ee0ebaf41754745bb62922ea28d1d8771c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:03:59.950225 kubelet[2857]: E1030 00:03:59.949885 2857 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ae7460399b84533dced047190ccb1ee0ebaf41754745bb62922ea28d1d8771c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54f47b4cdd-94wcm" Oct 30 00:03:59.950225 kubelet[2857]: E1030 00:03:59.949910 2857 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ae7460399b84533dced047190ccb1ee0ebaf41754745bb62922ea28d1d8771c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54f47b4cdd-94wcm" Oct 30 00:03:59.950956 containerd[1633]: 
time="2025-10-30T00:03:59.950924809Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54f47b4cdd-ltgss,Uid:676e7fcb-c57d-4b5d-87fb-71a75d798467,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b14fb9f4e7ea01983ceed45cf63f670c1f227ff4d1aecf52a074aa197bd5ed3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:03:59.951200 kubelet[2857]: E1030 00:03:59.949998 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54f47b4cdd-94wcm_calico-apiserver(f4d9bc01-6958-4087-977b-6989585a84eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54f47b4cdd-94wcm_calico-apiserver(f4d9bc01-6958-4087-977b-6989585a84eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9ae7460399b84533dced047190ccb1ee0ebaf41754745bb62922ea28d1d8771c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54f47b4cdd-94wcm" podUID="f4d9bc01-6958-4087-977b-6989585a84eb" Oct 30 00:03:59.951439 kubelet[2857]: E1030 00:03:59.951402 2857 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b14fb9f4e7ea01983ceed45cf63f670c1f227ff4d1aecf52a074aa197bd5ed3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:03:59.951557 kubelet[2857]: E1030 00:03:59.951541 2857 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"b14fb9f4e7ea01983ceed45cf63f670c1f227ff4d1aecf52a074aa197bd5ed3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54f47b4cdd-ltgss" Oct 30 00:03:59.951703 kubelet[2857]: E1030 00:03:59.951684 2857 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b14fb9f4e7ea01983ceed45cf63f670c1f227ff4d1aecf52a074aa197bd5ed3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54f47b4cdd-ltgss" Oct 30 00:03:59.951927 kubelet[2857]: E1030 00:03:59.951845 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54f47b4cdd-ltgss_calico-apiserver(676e7fcb-c57d-4b5d-87fb-71a75d798467)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54f47b4cdd-ltgss_calico-apiserver(676e7fcb-c57d-4b5d-87fb-71a75d798467)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b14fb9f4e7ea01983ceed45cf63f670c1f227ff4d1aecf52a074aa197bd5ed3f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54f47b4cdd-ltgss" podUID="676e7fcb-c57d-4b5d-87fb-71a75d798467" Oct 30 00:03:59.975046 containerd[1633]: time="2025-10-30T00:03:59.974916527Z" level=error msg="Failed to destroy network for sandbox \"f07892e35b83b8f2d5f740d3bcfe68733d91b5a209c1dfbd66af0f0712652a1c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:03:59.977097 containerd[1633]: time="2025-10-30T00:03:59.977029427Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-778756694d-5pt4t,Uid:c9ccbdc0-7a3b-420c-9200-91bd3b896e9d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f07892e35b83b8f2d5f740d3bcfe68733d91b5a209c1dfbd66af0f0712652a1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:03:59.977709 kubelet[2857]: E1030 00:03:59.977658 2857 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f07892e35b83b8f2d5f740d3bcfe68733d91b5a209c1dfbd66af0f0712652a1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:03:59.977782 kubelet[2857]: E1030 00:03:59.977747 2857 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f07892e35b83b8f2d5f740d3bcfe68733d91b5a209c1dfbd66af0f0712652a1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-778756694d-5pt4t" Oct 30 00:03:59.977782 kubelet[2857]: E1030 00:03:59.977775 2857 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f07892e35b83b8f2d5f740d3bcfe68733d91b5a209c1dfbd66af0f0712652a1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-778756694d-5pt4t" Oct 30 00:03:59.977877 kubelet[2857]: E1030 00:03:59.977838 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-778756694d-5pt4t_calico-system(c9ccbdc0-7a3b-420c-9200-91bd3b896e9d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-778756694d-5pt4t_calico-system(c9ccbdc0-7a3b-420c-9200-91bd3b896e9d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f07892e35b83b8f2d5f740d3bcfe68733d91b5a209c1dfbd66af0f0712652a1c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-778756694d-5pt4t" podUID="c9ccbdc0-7a3b-420c-9200-91bd3b896e9d" Oct 30 00:03:59.986265 containerd[1633]: time="2025-10-30T00:03:59.986194025Z" level=error msg="Failed to destroy network for sandbox \"ac11b390436032189a8cba4ca71a0e4187f4f1f93a8502215af2fe34889868e7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:03:59.987841 containerd[1633]: time="2025-10-30T00:03:59.987775546Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-kmqpz,Uid:6d8696ee-1cc2-47c7-9396-66f6dddcfb7a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac11b390436032189a8cba4ca71a0e4187f4f1f93a8502215af2fe34889868e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:03:59.988279 kubelet[2857]: E1030 
00:03:59.988062 2857 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac11b390436032189a8cba4ca71a0e4187f4f1f93a8502215af2fe34889868e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:03:59.988279 kubelet[2857]: E1030 00:03:59.988125 2857 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac11b390436032189a8cba4ca71a0e4187f4f1f93a8502215af2fe34889868e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-kmqpz" Oct 30 00:03:59.988279 kubelet[2857]: E1030 00:03:59.988155 2857 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac11b390436032189a8cba4ca71a0e4187f4f1f93a8502215af2fe34889868e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-kmqpz" Oct 30 00:03:59.988402 kubelet[2857]: E1030 00:03:59.988223 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-kmqpz_kube-system(6d8696ee-1cc2-47c7-9396-66f6dddcfb7a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-kmqpz_kube-system(6d8696ee-1cc2-47c7-9396-66f6dddcfb7a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ac11b390436032189a8cba4ca71a0e4187f4f1f93a8502215af2fe34889868e7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-kmqpz" podUID="6d8696ee-1cc2-47c7-9396-66f6dddcfb7a" Oct 30 00:04:00.002063 containerd[1633]: time="2025-10-30T00:04:00.001867962Z" level=error msg="Failed to destroy network for sandbox \"c7ec395794772c5ba6bbcd9fbaf3632d51a66f26d3b85fc844a443bfa719b3ec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:04:00.003593 containerd[1633]: time="2025-10-30T00:04:00.003538734Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-7dwtp,Uid:76c98e53-eb5b-4690-b648-f39ba68c3761,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7ec395794772c5ba6bbcd9fbaf3632d51a66f26d3b85fc844a443bfa719b3ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:04:00.003901 kubelet[2857]: E1030 00:04:00.003851 2857 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7ec395794772c5ba6bbcd9fbaf3632d51a66f26d3b85fc844a443bfa719b3ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:04:00.003983 kubelet[2857]: E1030 00:04:00.003931 2857 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7ec395794772c5ba6bbcd9fbaf3632d51a66f26d3b85fc844a443bfa719b3ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-7dwtp" Oct 30 00:04:00.003983 kubelet[2857]: E1030 00:04:00.003960 2857 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7ec395794772c5ba6bbcd9fbaf3632d51a66f26d3b85fc844a443bfa719b3ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-7dwtp" Oct 30 00:04:00.004064 kubelet[2857]: E1030 00:04:00.004030 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-7dwtp_calico-system(76c98e53-eb5b-4690-b648-f39ba68c3761)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-7dwtp_calico-system(76c98e53-eb5b-4690-b648-f39ba68c3761)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c7ec395794772c5ba6bbcd9fbaf3632d51a66f26d3b85fc844a443bfa719b3ec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-7dwtp" podUID="76c98e53-eb5b-4690-b648-f39ba68c3761" Oct 30 00:04:00.709322 systemd[1]: run-netns-cni\x2d641d8618\x2d67cd\x2de02b\x2d300e\x2d39e72629a5a4.mount: Deactivated successfully. Oct 30 00:04:00.709451 systemd[1]: run-netns-cni\x2dbbdb7a4e\x2d7ea7\x2d28a5\x2d1406\x2d9357426940f2.mount: Deactivated successfully. Oct 30 00:04:00.709542 systemd[1]: run-netns-cni\x2dfab85379\x2d4789\x2db760\x2df0ac\x2d61cf1b82afda.mount: Deactivated successfully. Oct 30 00:04:00.709646 systemd[1]: run-netns-cni\x2d416ebf6c\x2d5ff3\x2d7060\x2d3242\x2d1ea3f0f278ab.mount: Deactivated successfully. 
Oct 30 00:04:06.418881 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2656139792.mount: Deactivated successfully. Oct 30 00:04:10.540336 containerd[1633]: time="2025-10-30T00:04:10.539860881Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:04:10.652583 containerd[1633]: time="2025-10-30T00:04:10.652486448Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Oct 30 00:04:10.717119 containerd[1633]: time="2025-10-30T00:04:10.717002849Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:04:10.814830 containerd[1633]: time="2025-10-30T00:04:10.814267270Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 30 00:04:10.815092 containerd[1633]: time="2025-10-30T00:04:10.815054332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-67d895cdb6-zhrr4,Uid:40ebe1bf-5ac2-4543-a7a3-8ade43c308c1,Namespace:calico-system,Attempt:0,}" Oct 30 00:04:10.815180 containerd[1633]: time="2025-10-30T00:04:10.815066485Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 11.06531092s" Oct 30 00:04:10.815180 containerd[1633]: time="2025-10-30T00:04:10.815115479Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference 
\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Oct 30 00:04:10.963502 containerd[1633]: time="2025-10-30T00:04:10.963419273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bmzt2,Uid:dac688c3-f50b-4d08-95db-f1aa2487f334,Namespace:calico-system,Attempt:0,}" Oct 30 00:04:11.051252 containerd[1633]: time="2025-10-30T00:04:11.051159949Z" level=info msg="CreateContainer within sandbox \"7ec8a0b8220920cd2d988d75af1b9c95873b29aede77a29a7e0b4bad3185047c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Oct 30 00:04:11.267207 containerd[1633]: time="2025-10-30T00:04:11.267139030Z" level=error msg="Failed to destroy network for sandbox \"303741e9aaf903b7bbfe973e03c158d437caf417205ac9599868eaa9b058d3d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:04:11.269661 systemd[1]: run-netns-cni\x2d29bfb1a8\x2dd68a\x2d63b5\x2d8855\x2dbb91f28c8a8a.mount: Deactivated successfully. 
Oct 30 00:04:11.372257 containerd[1633]: time="2025-10-30T00:04:11.372196364Z" level=error msg="Failed to destroy network for sandbox \"6c7212b48d308a0fb43cef95b29922d5fef2514628c8801cc93cb0891be44096\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:04:11.473556 containerd[1633]: time="2025-10-30T00:04:11.473311066Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-67d895cdb6-zhrr4,Uid:40ebe1bf-5ac2-4543-a7a3-8ade43c308c1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"303741e9aaf903b7bbfe973e03c158d437caf417205ac9599868eaa9b058d3d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:04:11.473843 kubelet[2857]: E1030 00:04:11.473761 2857 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"303741e9aaf903b7bbfe973e03c158d437caf417205ac9599868eaa9b058d3d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:04:11.474280 kubelet[2857]: E1030 00:04:11.473886 2857 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"303741e9aaf903b7bbfe973e03c158d437caf417205ac9599868eaa9b058d3d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-67d895cdb6-zhrr4" Oct 30 00:04:11.474280 kubelet[2857]: E1030 00:04:11.473920 2857 kuberuntime_manager.go:1343] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"303741e9aaf903b7bbfe973e03c158d437caf417205ac9599868eaa9b058d3d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-67d895cdb6-zhrr4" Oct 30 00:04:11.474280 kubelet[2857]: E1030 00:04:11.474023 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-67d895cdb6-zhrr4_calico-system(40ebe1bf-5ac2-4543-a7a3-8ade43c308c1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-67d895cdb6-zhrr4_calico-system(40ebe1bf-5ac2-4543-a7a3-8ade43c308c1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"303741e9aaf903b7bbfe973e03c158d437caf417205ac9599868eaa9b058d3d3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-67d895cdb6-zhrr4" podUID="40ebe1bf-5ac2-4543-a7a3-8ade43c308c1" Oct 30 00:04:11.586929 containerd[1633]: time="2025-10-30T00:04:11.586745417Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bmzt2,Uid:dac688c3-f50b-4d08-95db-f1aa2487f334,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c7212b48d308a0fb43cef95b29922d5fef2514628c8801cc93cb0891be44096\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:04:11.587663 kubelet[2857]: E1030 00:04:11.587084 2857 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"6c7212b48d308a0fb43cef95b29922d5fef2514628c8801cc93cb0891be44096\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:04:11.587663 kubelet[2857]: E1030 00:04:11.587163 2857 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c7212b48d308a0fb43cef95b29922d5fef2514628c8801cc93cb0891be44096\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bmzt2" Oct 30 00:04:11.587663 kubelet[2857]: E1030 00:04:11.587198 2857 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6c7212b48d308a0fb43cef95b29922d5fef2514628c8801cc93cb0891be44096\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bmzt2" Oct 30 00:04:11.587802 kubelet[2857]: E1030 00:04:11.587263 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bmzt2_calico-system(dac688c3-f50b-4d08-95db-f1aa2487f334)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bmzt2_calico-system(dac688c3-f50b-4d08-95db-f1aa2487f334)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6c7212b48d308a0fb43cef95b29922d5fef2514628c8801cc93cb0891be44096\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bmzt2" 
podUID="dac688c3-f50b-4d08-95db-f1aa2487f334" Oct 30 00:04:11.822575 systemd[1]: run-netns-cni\x2df5fe3559\x2d632c\x2d528d\x2db4d1\x2d93abad5d8f48.mount: Deactivated successfully. Oct 30 00:04:12.980857 containerd[1633]: time="2025-10-30T00:04:12.980778730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54f47b4cdd-ltgss,Uid:676e7fcb-c57d-4b5d-87fb-71a75d798467,Namespace:calico-apiserver,Attempt:0,}" Oct 30 00:04:13.014876 containerd[1633]: time="2025-10-30T00:04:13.014819057Z" level=info msg="Container 6956619011f4b032055c93357f3467b8616d3ebb0ddb660b9c2ed1e1fce8ce26: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:04:13.035997 containerd[1633]: time="2025-10-30T00:04:13.035926873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54f47b4cdd-94wcm,Uid:f4d9bc01-6958-4087-977b-6989585a84eb,Namespace:calico-apiserver,Attempt:0,}" Oct 30 00:04:13.051655 kubelet[2857]: E1030 00:04:13.050961 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:04:13.052323 containerd[1633]: time="2025-10-30T00:04:13.051559153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-666sp,Uid:54eb03ba-2fa6-4d86-891f-3330d4c9a1a2,Namespace:kube-system,Attempt:0,}" Oct 30 00:04:13.131419 containerd[1633]: time="2025-10-30T00:04:13.131362267Z" level=info msg="CreateContainer within sandbox \"7ec8a0b8220920cd2d988d75af1b9c95873b29aede77a29a7e0b4bad3185047c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6956619011f4b032055c93357f3467b8616d3ebb0ddb660b9c2ed1e1fce8ce26\"" Oct 30 00:04:13.134623 containerd[1633]: time="2025-10-30T00:04:13.133850250Z" level=info msg="StartContainer for \"6956619011f4b032055c93357f3467b8616d3ebb0ddb660b9c2ed1e1fce8ce26\"" Oct 30 00:04:13.140849 containerd[1633]: time="2025-10-30T00:04:13.140740058Z" level=info 
msg="connecting to shim 6956619011f4b032055c93357f3467b8616d3ebb0ddb660b9c2ed1e1fce8ce26" address="unix:///run/containerd/s/e2bfd54f0cb0271eea66752af763f1c49c18e33837a5b9c572432850b5348a0e" protocol=ttrpc version=3 Oct 30 00:04:13.191533 systemd[1]: Started cri-containerd-6956619011f4b032055c93357f3467b8616d3ebb0ddb660b9c2ed1e1fce8ce26.scope - libcontainer container 6956619011f4b032055c93357f3467b8616d3ebb0ddb660b9c2ed1e1fce8ce26. Oct 30 00:04:13.224630 containerd[1633]: time="2025-10-30T00:04:13.224324627Z" level=error msg="Failed to destroy network for sandbox \"0b03ad588f64b5a1d6fb893ec5cb29cd57ab90ac9c14366b8f3b4b85b87d3d48\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:04:13.228845 containerd[1633]: time="2025-10-30T00:04:13.228139115Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54f47b4cdd-ltgss,Uid:676e7fcb-c57d-4b5d-87fb-71a75d798467,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b03ad588f64b5a1d6fb893ec5cb29cd57ab90ac9c14366b8f3b4b85b87d3d48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:04:13.229059 kubelet[2857]: E1030 00:04:13.228982 2857 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b03ad588f64b5a1d6fb893ec5cb29cd57ab90ac9c14366b8f3b4b85b87d3d48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:04:13.231856 kubelet[2857]: E1030 00:04:13.231695 2857 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"0b03ad588f64b5a1d6fb893ec5cb29cd57ab90ac9c14366b8f3b4b85b87d3d48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54f47b4cdd-ltgss" Oct 30 00:04:13.231856 kubelet[2857]: E1030 00:04:13.231776 2857 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b03ad588f64b5a1d6fb893ec5cb29cd57ab90ac9c14366b8f3b4b85b87d3d48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54f47b4cdd-ltgss" Oct 30 00:04:13.232104 kubelet[2857]: E1030 00:04:13.232015 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54f47b4cdd-ltgss_calico-apiserver(676e7fcb-c57d-4b5d-87fb-71a75d798467)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54f47b4cdd-ltgss_calico-apiserver(676e7fcb-c57d-4b5d-87fb-71a75d798467)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0b03ad588f64b5a1d6fb893ec5cb29cd57ab90ac9c14366b8f3b4b85b87d3d48\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54f47b4cdd-ltgss" podUID="676e7fcb-c57d-4b5d-87fb-71a75d798467" Oct 30 00:04:13.296865 containerd[1633]: time="2025-10-30T00:04:13.259512225Z" level=error msg="Failed to destroy network for sandbox \"f480e0dd28925915aa5bfdb37295359d2d076ad3851039709d25b1d93060076e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:04:13.297096 containerd[1633]: time="2025-10-30T00:04:13.266814424Z" level=error msg="Failed to destroy network for sandbox \"5f287c294bcd802d13bb8b3ecb03551f968968b15c5dd0342d717da0de1b231c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:04:13.427171 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Oct 30 00:04:13.428163 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Oct 30 00:04:13.477231 containerd[1633]: time="2025-10-30T00:04:13.477110551Z" level=info msg="StartContainer for \"6956619011f4b032055c93357f3467b8616d3ebb0ddb660b9c2ed1e1fce8ce26\" returns successfully" Oct 30 00:04:13.496766 containerd[1633]: time="2025-10-30T00:04:13.496653524Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54f47b4cdd-94wcm,Uid:f4d9bc01-6958-4087-977b-6989585a84eb,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f480e0dd28925915aa5bfdb37295359d2d076ad3851039709d25b1d93060076e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:04:13.497465 kubelet[2857]: E1030 00:04:13.497362 2857 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f480e0dd28925915aa5bfdb37295359d2d076ad3851039709d25b1d93060076e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:04:13.497551 kubelet[2857]: E1030 
00:04:13.497481 2857 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f480e0dd28925915aa5bfdb37295359d2d076ad3851039709d25b1d93060076e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54f47b4cdd-94wcm" Oct 30 00:04:13.497551 kubelet[2857]: E1030 00:04:13.497510 2857 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f480e0dd28925915aa5bfdb37295359d2d076ad3851039709d25b1d93060076e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-54f47b4cdd-94wcm" Oct 30 00:04:13.497703 kubelet[2857]: E1030 00:04:13.497593 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-54f47b4cdd-94wcm_calico-apiserver(f4d9bc01-6958-4087-977b-6989585a84eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-54f47b4cdd-94wcm_calico-apiserver(f4d9bc01-6958-4087-977b-6989585a84eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f480e0dd28925915aa5bfdb37295359d2d076ad3851039709d25b1d93060076e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-54f47b4cdd-94wcm" podUID="f4d9bc01-6958-4087-977b-6989585a84eb" Oct 30 00:04:13.531007 containerd[1633]: time="2025-10-30T00:04:13.530910466Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-666sp,Uid:54eb03ba-2fa6-4d86-891f-3330d4c9a1a2,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f287c294bcd802d13bb8b3ecb03551f968968b15c5dd0342d717da0de1b231c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:04:13.531294 kubelet[2857]: E1030 00:04:13.531246 2857 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f287c294bcd802d13bb8b3ecb03551f968968b15c5dd0342d717da0de1b231c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Oct 30 00:04:13.531365 kubelet[2857]: E1030 00:04:13.531318 2857 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f287c294bcd802d13bb8b3ecb03551f968968b15c5dd0342d717da0de1b231c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-666sp" Oct 30 00:04:13.531365 kubelet[2857]: E1030 00:04:13.531341 2857 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f287c294bcd802d13bb8b3ecb03551f968968b15c5dd0342d717da0de1b231c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-666sp" Oct 30 00:04:13.531463 kubelet[2857]: E1030 00:04:13.531398 2857 pod_workers.go:1324] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-666sp_kube-system(54eb03ba-2fa6-4d86-891f-3330d4c9a1a2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-666sp_kube-system(54eb03ba-2fa6-4d86-891f-3330d4c9a1a2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5f287c294bcd802d13bb8b3ecb03551f968968b15c5dd0342d717da0de1b231c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-666sp" podUID="54eb03ba-2fa6-4d86-891f-3330d4c9a1a2" Oct 30 00:04:13.629922 kubelet[2857]: E1030 00:04:13.629815 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:04:13.783911 containerd[1633]: time="2025-10-30T00:04:13.783754502Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6956619011f4b032055c93357f3467b8616d3ebb0ddb660b9c2ed1e1fce8ce26\" id:\"1f2a877ad5113f8924d9cf761a145231ec5097e77aaeb2d54e3c26fb66f48262\" pid:4140 exit_status:1 exited_at:{seconds:1761782653 nanos:783296683}" Oct 30 00:04:14.014657 systemd[1]: run-netns-cni\x2d60fd346e\x2dede5\x2d78ef\x2dc2cc\x2daa8d8197d383.mount: Deactivated successfully. Oct 30 00:04:14.014785 systemd[1]: run-netns-cni\x2dc40db981\x2dc576\x2dc71e\x2d0f61\x2d77ff3c4a6ca1.mount: Deactivated successfully. Oct 30 00:04:14.014862 systemd[1]: run-netns-cni\x2d2731f769\x2d78cb\x2da0dd\x2d97b5\x2dfdf1d1a33e90.mount: Deactivated successfully. 
Oct 30 00:04:14.242101 kubelet[2857]: I1030 00:04:14.241369 2857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-dxnbl" podStartSLOduration=3.996692623 podStartE2EDuration="31.24133114s" podCreationTimestamp="2025-10-30 00:03:43 +0000 UTC" firstStartedPulling="2025-10-30 00:03:43.57135085 +0000 UTC m=+23.927860618" lastFinishedPulling="2025-10-30 00:04:10.815989368 +0000 UTC m=+51.172499135" observedRunningTime="2025-10-30 00:04:14.08353108 +0000 UTC m=+54.440040877" watchObservedRunningTime="2025-10-30 00:04:14.24133114 +0000 UTC m=+54.597840928" Oct 30 00:04:14.330740 kubelet[2857]: I1030 00:04:14.330593 2857 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40ebe1bf-5ac2-4543-a7a3-8ade43c308c1-whisker-ca-bundle\") pod \"40ebe1bf-5ac2-4543-a7a3-8ade43c308c1\" (UID: \"40ebe1bf-5ac2-4543-a7a3-8ade43c308c1\") " Oct 30 00:04:14.331309 kubelet[2857]: I1030 00:04:14.331268 2857 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40ebe1bf-5ac2-4543-a7a3-8ade43c308c1-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "40ebe1bf-5ac2-4543-a7a3-8ade43c308c1" (UID: "40ebe1bf-5ac2-4543-a7a3-8ade43c308c1"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 30 00:04:14.331693 kubelet[2857]: I1030 00:04:14.331662 2857 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bjgw\" (UniqueName: \"kubernetes.io/projected/40ebe1bf-5ac2-4543-a7a3-8ade43c308c1-kube-api-access-4bjgw\") pod \"40ebe1bf-5ac2-4543-a7a3-8ade43c308c1\" (UID: \"40ebe1bf-5ac2-4543-a7a3-8ade43c308c1\") " Oct 30 00:04:14.331743 kubelet[2857]: I1030 00:04:14.331697 2857 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/40ebe1bf-5ac2-4543-a7a3-8ade43c308c1-whisker-backend-key-pair\") pod \"40ebe1bf-5ac2-4543-a7a3-8ade43c308c1\" (UID: \"40ebe1bf-5ac2-4543-a7a3-8ade43c308c1\") " Oct 30 00:04:14.331789 kubelet[2857]: I1030 00:04:14.331777 2857 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40ebe1bf-5ac2-4543-a7a3-8ade43c308c1-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Oct 30 00:04:14.337622 kubelet[2857]: I1030 00:04:14.337535 2857 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40ebe1bf-5ac2-4543-a7a3-8ade43c308c1-kube-api-access-4bjgw" (OuterVolumeSpecName: "kube-api-access-4bjgw") pod "40ebe1bf-5ac2-4543-a7a3-8ade43c308c1" (UID: "40ebe1bf-5ac2-4543-a7a3-8ade43c308c1"). InnerVolumeSpecName "kube-api-access-4bjgw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 30 00:04:14.339797 kubelet[2857]: I1030 00:04:14.339759 2857 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40ebe1bf-5ac2-4543-a7a3-8ade43c308c1-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "40ebe1bf-5ac2-4543-a7a3-8ade43c308c1" (UID: "40ebe1bf-5ac2-4543-a7a3-8ade43c308c1"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 30 00:04:14.340560 systemd[1]: var-lib-kubelet-pods-40ebe1bf\x2d5ac2\x2d4543\x2da7a3\x2d8ade43c308c1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4bjgw.mount: Deactivated successfully. Oct 30 00:04:14.340757 systemd[1]: var-lib-kubelet-pods-40ebe1bf\x2d5ac2\x2d4543\x2da7a3\x2d8ade43c308c1-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Oct 30 00:04:14.432131 kubelet[2857]: I1030 00:04:14.432045 2857 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4bjgw\" (UniqueName: \"kubernetes.io/projected/40ebe1bf-5ac2-4543-a7a3-8ade43c308c1-kube-api-access-4bjgw\") on node \"localhost\" DevicePath \"\"" Oct 30 00:04:14.432131 kubelet[2857]: I1030 00:04:14.432095 2857 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/40ebe1bf-5ac2-4543-a7a3-8ade43c308c1-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Oct 30 00:04:14.634388 kubelet[2857]: E1030 00:04:14.634343 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:04:14.639392 systemd[1]: Removed slice kubepods-besteffort-pod40ebe1bf_5ac2_4543_a7a3_8ade43c308c1.slice - libcontainer container kubepods-besteffort-pod40ebe1bf_5ac2_4543_a7a3_8ade43c308c1.slice. 
Oct 30 00:04:14.758620 containerd[1633]: time="2025-10-30T00:04:14.758551635Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6956619011f4b032055c93357f3467b8616d3ebb0ddb660b9c2ed1e1fce8ce26\" id:\"36035d031eb848750924ee35c2cf621e54f3c4a42b5e88227efe967a964def54\" pid:4174 exit_status:1 exited_at:{seconds:1761782654 nanos:758141649}" Oct 30 00:04:14.831383 containerd[1633]: time="2025-10-30T00:04:14.829229394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-778756694d-5pt4t,Uid:c9ccbdc0-7a3b-420c-9200-91bd3b896e9d,Namespace:calico-system,Attempt:0,}" Oct 30 00:04:15.015402 kubelet[2857]: E1030 00:04:15.015061 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:04:15.017395 containerd[1633]: time="2025-10-30T00:04:15.017112126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-kmqpz,Uid:6d8696ee-1cc2-47c7-9396-66f6dddcfb7a,Namespace:kube-system,Attempt:0,}" Oct 30 00:04:15.141108 containerd[1633]: time="2025-10-30T00:04:15.141054920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-7dwtp,Uid:76c98e53-eb5b-4690-b648-f39ba68c3761,Namespace:calico-system,Attempt:0,}" Oct 30 00:04:15.739478 kubelet[2857]: I1030 00:04:15.739406 2857 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40ebe1bf-5ac2-4543-a7a3-8ade43c308c1" path="/var/lib/kubelet/pods/40ebe1bf-5ac2-4543-a7a3-8ade43c308c1/volumes" Oct 30 00:04:15.897440 systemd[1]: Created slice kubepods-besteffort-podc1be5870_aa7d_44d6_8228_72dd5ed8c5f5.slice - libcontainer container kubepods-besteffort-podc1be5870_aa7d_44d6_8228_72dd5ed8c5f5.slice. 
Oct 30 00:04:15.943630 kubelet[2857]: I1030 00:04:15.943543 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c1be5870-aa7d-44d6-8228-72dd5ed8c5f5-whisker-backend-key-pair\") pod \"whisker-85cf958ddd-dqv9z\" (UID: \"c1be5870-aa7d-44d6-8228-72dd5ed8c5f5\") " pod="calico-system/whisker-85cf958ddd-dqv9z" Oct 30 00:04:15.943630 kubelet[2857]: I1030 00:04:15.943633 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c1be5870-aa7d-44d6-8228-72dd5ed8c5f5-whisker-ca-bundle\") pod \"whisker-85cf958ddd-dqv9z\" (UID: \"c1be5870-aa7d-44d6-8228-72dd5ed8c5f5\") " pod="calico-system/whisker-85cf958ddd-dqv9z" Oct 30 00:04:15.943841 kubelet[2857]: I1030 00:04:15.943667 2857 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89vmd\" (UniqueName: \"kubernetes.io/projected/c1be5870-aa7d-44d6-8228-72dd5ed8c5f5-kube-api-access-89vmd\") pod \"whisker-85cf958ddd-dqv9z\" (UID: \"c1be5870-aa7d-44d6-8228-72dd5ed8c5f5\") " pod="calico-system/whisker-85cf958ddd-dqv9z" Oct 30 00:04:16.313796 systemd[1]: Started sshd@9-10.0.0.82:22-10.0.0.1:37250.service - OpenSSH per-connection server daemon (10.0.0.1:37250). Oct 30 00:04:16.362179 containerd[1633]: time="2025-10-30T00:04:16.362103972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-85cf958ddd-dqv9z,Uid:c1be5870-aa7d-44d6-8228-72dd5ed8c5f5,Namespace:calico-system,Attempt:0,}" Oct 30 00:04:16.579877 sshd[4280]: Accepted publickey for core from 10.0.0.1 port 37250 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 30 00:04:16.582091 sshd-session[4280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:04:16.589287 systemd-logind[1613]: New session 10 of user core. 
Oct 30 00:04:16.595811 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 30 00:04:16.769341 systemd-networkd[1516]: calie5e1b2cdb5f: Link UP Oct 30 00:04:16.770680 systemd-networkd[1516]: calie5e1b2cdb5f: Gained carrier Oct 30 00:04:16.864151 containerd[1633]: 2025-10-30 00:04:15.309 [INFO][4213] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 30 00:04:16.864151 containerd[1633]: 2025-10-30 00:04:15.710 [INFO][4213] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--kmqpz-eth0 coredns-66bc5c9577- kube-system 6d8696ee-1cc2-47c7-9396-66f6dddcfb7a 861 0 2025-10-30 00:03:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-kmqpz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie5e1b2cdb5f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="2918dfa8067012a91e450f49dc1c6bdec236fbe69aaf6c466f84dd8d99c088ef" Namespace="kube-system" Pod="coredns-66bc5c9577-kmqpz" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--kmqpz-" Oct 30 00:04:16.864151 containerd[1633]: 2025-10-30 00:04:15.711 [INFO][4213] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2918dfa8067012a91e450f49dc1c6bdec236fbe69aaf6c466f84dd8d99c088ef" Namespace="kube-system" Pod="coredns-66bc5c9577-kmqpz" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--kmqpz-eth0" Oct 30 00:04:16.864151 containerd[1633]: 2025-10-30 00:04:16.162 [INFO][4243] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2918dfa8067012a91e450f49dc1c6bdec236fbe69aaf6c466f84dd8d99c088ef" HandleID="k8s-pod-network.2918dfa8067012a91e450f49dc1c6bdec236fbe69aaf6c466f84dd8d99c088ef" 
Workload="localhost-k8s-coredns--66bc5c9577--kmqpz-eth0" Oct 30 00:04:16.864483 containerd[1633]: 2025-10-30 00:04:16.163 [INFO][4243] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2918dfa8067012a91e450f49dc1c6bdec236fbe69aaf6c466f84dd8d99c088ef" HandleID="k8s-pod-network.2918dfa8067012a91e450f49dc1c6bdec236fbe69aaf6c466f84dd8d99c088ef" Workload="localhost-k8s-coredns--66bc5c9577--kmqpz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f750), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-kmqpz", "timestamp":"2025-10-30 00:04:16.162785789 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:04:16.864483 containerd[1633]: 2025-10-30 00:04:16.163 [INFO][4243] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 00:04:16.864483 containerd[1633]: 2025-10-30 00:04:16.163 [INFO][4243] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 30 00:04:16.864483 containerd[1633]: 2025-10-30 00:04:16.164 [INFO][4243] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 00:04:16.864483 containerd[1633]: 2025-10-30 00:04:16.357 [INFO][4243] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2918dfa8067012a91e450f49dc1c6bdec236fbe69aaf6c466f84dd8d99c088ef" host="localhost" Oct 30 00:04:16.864483 containerd[1633]: 2025-10-30 00:04:16.390 [INFO][4243] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 00:04:16.864483 containerd[1633]: 2025-10-30 00:04:16.442 [INFO][4243] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 00:04:16.864483 containerd[1633]: 2025-10-30 00:04:16.464 [INFO][4243] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 00:04:16.864483 containerd[1633]: 2025-10-30 00:04:16.491 [INFO][4243] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 00:04:16.864483 containerd[1633]: 2025-10-30 00:04:16.491 [INFO][4243] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2918dfa8067012a91e450f49dc1c6bdec236fbe69aaf6c466f84dd8d99c088ef" host="localhost" Oct 30 00:04:16.865392 containerd[1633]: 2025-10-30 00:04:16.606 [INFO][4243] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2918dfa8067012a91e450f49dc1c6bdec236fbe69aaf6c466f84dd8d99c088ef Oct 30 00:04:16.865392 containerd[1633]: 2025-10-30 00:04:16.630 [INFO][4243] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2918dfa8067012a91e450f49dc1c6bdec236fbe69aaf6c466f84dd8d99c088ef" host="localhost" Oct 30 00:04:16.865392 containerd[1633]: 2025-10-30 00:04:16.730 [INFO][4243] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.2918dfa8067012a91e450f49dc1c6bdec236fbe69aaf6c466f84dd8d99c088ef" host="localhost" Oct 30 00:04:16.865392 containerd[1633]: 2025-10-30 00:04:16.730 [INFO][4243] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.2918dfa8067012a91e450f49dc1c6bdec236fbe69aaf6c466f84dd8d99c088ef" host="localhost" Oct 30 00:04:16.865392 containerd[1633]: 2025-10-30 00:04:16.730 [INFO][4243] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 30 00:04:16.865392 containerd[1633]: 2025-10-30 00:04:16.730 [INFO][4243] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="2918dfa8067012a91e450f49dc1c6bdec236fbe69aaf6c466f84dd8d99c088ef" HandleID="k8s-pod-network.2918dfa8067012a91e450f49dc1c6bdec236fbe69aaf6c466f84dd8d99c088ef" Workload="localhost-k8s-coredns--66bc5c9577--kmqpz-eth0" Oct 30 00:04:16.865640 containerd[1633]: 2025-10-30 00:04:16.738 [INFO][4213] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2918dfa8067012a91e450f49dc1c6bdec236fbe69aaf6c466f84dd8d99c088ef" Namespace="kube-system" Pod="coredns-66bc5c9577-kmqpz" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--kmqpz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--kmqpz-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"6d8696ee-1cc2-47c7-9396-66f6dddcfb7a", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 3, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-kmqpz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie5e1b2cdb5f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:04:16.865640 containerd[1633]: 2025-10-30 00:04:16.738 [INFO][4213] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="2918dfa8067012a91e450f49dc1c6bdec236fbe69aaf6c466f84dd8d99c088ef" Namespace="kube-system" Pod="coredns-66bc5c9577-kmqpz" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--kmqpz-eth0" Oct 30 00:04:16.865640 containerd[1633]: 2025-10-30 00:04:16.738 [INFO][4213] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie5e1b2cdb5f ContainerID="2918dfa8067012a91e450f49dc1c6bdec236fbe69aaf6c466f84dd8d99c088ef" Namespace="kube-system" Pod="coredns-66bc5c9577-kmqpz" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--kmqpz-eth0" Oct 30 
00:04:16.865640 containerd[1633]: 2025-10-30 00:04:16.774 [INFO][4213] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2918dfa8067012a91e450f49dc1c6bdec236fbe69aaf6c466f84dd8d99c088ef" Namespace="kube-system" Pod="coredns-66bc5c9577-kmqpz" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--kmqpz-eth0" Oct 30 00:04:16.865640 containerd[1633]: 2025-10-30 00:04:16.775 [INFO][4213] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2918dfa8067012a91e450f49dc1c6bdec236fbe69aaf6c466f84dd8d99c088ef" Namespace="kube-system" Pod="coredns-66bc5c9577-kmqpz" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--kmqpz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--kmqpz-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"6d8696ee-1cc2-47c7-9396-66f6dddcfb7a", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 3, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2918dfa8067012a91e450f49dc1c6bdec236fbe69aaf6c466f84dd8d99c088ef", Pod:"coredns-66bc5c9577-kmqpz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie5e1b2cdb5f", 
MAC:"a2:56:a2:95:14:f0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:04:16.865640 containerd[1633]: 2025-10-30 00:04:16.856 [INFO][4213] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2918dfa8067012a91e450f49dc1c6bdec236fbe69aaf6c466f84dd8d99c088ef" Namespace="kube-system" Pod="coredns-66bc5c9577-kmqpz" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--kmqpz-eth0" Oct 30 00:04:16.880423 sshd[4390]: Connection closed by 10.0.0.1 port 37250 Oct 30 00:04:16.882253 sshd-session[4280]: pam_unix(sshd:session): session closed for user core Oct 30 00:04:16.888800 systemd-networkd[1516]: cali4aed3629613: Link UP Oct 30 00:04:16.890763 systemd-networkd[1516]: cali4aed3629613: Gained carrier Oct 30 00:04:16.895272 systemd[1]: sshd@9-10.0.0.82:22-10.0.0.1:37250.service: Deactivated successfully. Oct 30 00:04:16.902382 systemd[1]: session-10.scope: Deactivated successfully. Oct 30 00:04:16.906280 systemd-logind[1613]: Session 10 logged out. Waiting for processes to exit. Oct 30 00:04:16.912761 systemd-logind[1613]: Removed session 10. 
Oct 30 00:04:16.922444 containerd[1633]: 2025-10-30 00:04:15.386 [INFO][4227] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 30 00:04:16.922444 containerd[1633]: 2025-10-30 00:04:15.710 [INFO][4227] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--7dwtp-eth0 goldmane-7c778bb748- calico-system 76c98e53-eb5b-4690-b648-f39ba68c3761 863 0 2025-10-30 00:03:40 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-7dwtp eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali4aed3629613 [] [] }} ContainerID="a97a5b38379e82d3d8a7183acc7d1cb89d4c8e407407f7dbe9bfe8099a6ff4f1" Namespace="calico-system" Pod="goldmane-7c778bb748-7dwtp" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--7dwtp-" Oct 30 00:04:16.922444 containerd[1633]: 2025-10-30 00:04:15.711 [INFO][4227] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a97a5b38379e82d3d8a7183acc7d1cb89d4c8e407407f7dbe9bfe8099a6ff4f1" Namespace="calico-system" Pod="goldmane-7c778bb748-7dwtp" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--7dwtp-eth0" Oct 30 00:04:16.922444 containerd[1633]: 2025-10-30 00:04:16.162 [INFO][4241] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a97a5b38379e82d3d8a7183acc7d1cb89d4c8e407407f7dbe9bfe8099a6ff4f1" HandleID="k8s-pod-network.a97a5b38379e82d3d8a7183acc7d1cb89d4c8e407407f7dbe9bfe8099a6ff4f1" Workload="localhost-k8s-goldmane--7c778bb748--7dwtp-eth0" Oct 30 00:04:16.922444 containerd[1633]: 2025-10-30 00:04:16.163 [INFO][4241] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a97a5b38379e82d3d8a7183acc7d1cb89d4c8e407407f7dbe9bfe8099a6ff4f1" 
HandleID="k8s-pod-network.a97a5b38379e82d3d8a7183acc7d1cb89d4c8e407407f7dbe9bfe8099a6ff4f1" Workload="localhost-k8s-goldmane--7c778bb748--7dwtp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00018c7e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-7dwtp", "timestamp":"2025-10-30 00:04:16.162773635 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:04:16.922444 containerd[1633]: 2025-10-30 00:04:16.163 [INFO][4241] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 00:04:16.922444 containerd[1633]: 2025-10-30 00:04:16.730 [INFO][4241] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 30 00:04:16.922444 containerd[1633]: 2025-10-30 00:04:16.730 [INFO][4241] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 00:04:16.922444 containerd[1633]: 2025-10-30 00:04:16.741 [INFO][4241] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a97a5b38379e82d3d8a7183acc7d1cb89d4c8e407407f7dbe9bfe8099a6ff4f1" host="localhost" Oct 30 00:04:16.922444 containerd[1633]: 2025-10-30 00:04:16.749 [INFO][4241] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 00:04:16.922444 containerd[1633]: 2025-10-30 00:04:16.758 [INFO][4241] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 00:04:16.922444 containerd[1633]: 2025-10-30 00:04:16.763 [INFO][4241] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 00:04:16.922444 containerd[1633]: 2025-10-30 00:04:16.767 [INFO][4241] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 00:04:16.922444 containerd[1633]: 2025-10-30 00:04:16.767 
[INFO][4241] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a97a5b38379e82d3d8a7183acc7d1cb89d4c8e407407f7dbe9bfe8099a6ff4f1" host="localhost" Oct 30 00:04:16.922444 containerd[1633]: 2025-10-30 00:04:16.773 [INFO][4241] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a97a5b38379e82d3d8a7183acc7d1cb89d4c8e407407f7dbe9bfe8099a6ff4f1 Oct 30 00:04:16.922444 containerd[1633]: 2025-10-30 00:04:16.788 [INFO][4241] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a97a5b38379e82d3d8a7183acc7d1cb89d4c8e407407f7dbe9bfe8099a6ff4f1" host="localhost" Oct 30 00:04:16.922444 containerd[1633]: 2025-10-30 00:04:16.861 [INFO][4241] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.a97a5b38379e82d3d8a7183acc7d1cb89d4c8e407407f7dbe9bfe8099a6ff4f1" host="localhost" Oct 30 00:04:16.922444 containerd[1633]: 2025-10-30 00:04:16.861 [INFO][4241] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.a97a5b38379e82d3d8a7183acc7d1cb89d4c8e407407f7dbe9bfe8099a6ff4f1" host="localhost" Oct 30 00:04:16.922444 containerd[1633]: 2025-10-30 00:04:16.861 [INFO][4241] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 30 00:04:16.922444 containerd[1633]: 2025-10-30 00:04:16.861 [INFO][4241] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="a97a5b38379e82d3d8a7183acc7d1cb89d4c8e407407f7dbe9bfe8099a6ff4f1" HandleID="k8s-pod-network.a97a5b38379e82d3d8a7183acc7d1cb89d4c8e407407f7dbe9bfe8099a6ff4f1" Workload="localhost-k8s-goldmane--7c778bb748--7dwtp-eth0" Oct 30 00:04:16.923381 containerd[1633]: 2025-10-30 00:04:16.865 [INFO][4227] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a97a5b38379e82d3d8a7183acc7d1cb89d4c8e407407f7dbe9bfe8099a6ff4f1" Namespace="calico-system" Pod="goldmane-7c778bb748-7dwtp" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--7dwtp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--7dwtp-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"76c98e53-eb5b-4690-b648-f39ba68c3761", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 3, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-7dwtp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4aed3629613", 
MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:04:16.923381 containerd[1633]: 2025-10-30 00:04:16.865 [INFO][4227] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="a97a5b38379e82d3d8a7183acc7d1cb89d4c8e407407f7dbe9bfe8099a6ff4f1" Namespace="calico-system" Pod="goldmane-7c778bb748-7dwtp" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--7dwtp-eth0" Oct 30 00:04:16.923381 containerd[1633]: 2025-10-30 00:04:16.865 [INFO][4227] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4aed3629613 ContainerID="a97a5b38379e82d3d8a7183acc7d1cb89d4c8e407407f7dbe9bfe8099a6ff4f1" Namespace="calico-system" Pod="goldmane-7c778bb748-7dwtp" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--7dwtp-eth0" Oct 30 00:04:16.923381 containerd[1633]: 2025-10-30 00:04:16.889 [INFO][4227] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a97a5b38379e82d3d8a7183acc7d1cb89d4c8e407407f7dbe9bfe8099a6ff4f1" Namespace="calico-system" Pod="goldmane-7c778bb748-7dwtp" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--7dwtp-eth0" Oct 30 00:04:16.923381 containerd[1633]: 2025-10-30 00:04:16.889 [INFO][4227] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a97a5b38379e82d3d8a7183acc7d1cb89d4c8e407407f7dbe9bfe8099a6ff4f1" Namespace="calico-system" Pod="goldmane-7c778bb748-7dwtp" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--7dwtp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--7dwtp-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"76c98e53-eb5b-4690-b648-f39ba68c3761", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 3, 40, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a97a5b38379e82d3d8a7183acc7d1cb89d4c8e407407f7dbe9bfe8099a6ff4f1", Pod:"goldmane-7c778bb748-7dwtp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4aed3629613", MAC:"76:5b:5f:2d:2b:d4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:04:16.923381 containerd[1633]: 2025-10-30 00:04:16.918 [INFO][4227] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a97a5b38379e82d3d8a7183acc7d1cb89d4c8e407407f7dbe9bfe8099a6ff4f1" Namespace="calico-system" Pod="goldmane-7c778bb748-7dwtp" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--7dwtp-eth0" Oct 30 00:04:16.960112 systemd-networkd[1516]: cali8ff78795dd0: Link UP Oct 30 00:04:16.960801 systemd-networkd[1516]: cali8ff78795dd0: Gained carrier Oct 30 00:04:16.986731 containerd[1633]: 2025-10-30 00:04:15.208 [INFO][4187] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 30 00:04:16.986731 containerd[1633]: 2025-10-30 00:04:15.710 [INFO][4187] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--778756694d--5pt4t-eth0 calico-kube-controllers-778756694d- calico-system c9ccbdc0-7a3b-420c-9200-91bd3b896e9d 860 
0 2025-10-30 00:03:43 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:778756694d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-778756694d-5pt4t eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8ff78795dd0 [] [] }} ContainerID="619728d513ee5165171c762e879c87b9f161047a50ad3783e3694120d42ca327" Namespace="calico-system" Pod="calico-kube-controllers-778756694d-5pt4t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--778756694d--5pt4t-" Oct 30 00:04:16.986731 containerd[1633]: 2025-10-30 00:04:15.711 [INFO][4187] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="619728d513ee5165171c762e879c87b9f161047a50ad3783e3694120d42ca327" Namespace="calico-system" Pod="calico-kube-controllers-778756694d-5pt4t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--778756694d--5pt4t-eth0" Oct 30 00:04:16.986731 containerd[1633]: 2025-10-30 00:04:16.163 [INFO][4245] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="619728d513ee5165171c762e879c87b9f161047a50ad3783e3694120d42ca327" HandleID="k8s-pod-network.619728d513ee5165171c762e879c87b9f161047a50ad3783e3694120d42ca327" Workload="localhost-k8s-calico--kube--controllers--778756694d--5pt4t-eth0" Oct 30 00:04:16.986731 containerd[1633]: 2025-10-30 00:04:16.163 [INFO][4245] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="619728d513ee5165171c762e879c87b9f161047a50ad3783e3694120d42ca327" HandleID="k8s-pod-network.619728d513ee5165171c762e879c87b9f161047a50ad3783e3694120d42ca327" Workload="localhost-k8s-calico--kube--controllers--778756694d--5pt4t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000b1790), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", 
"pod":"calico-kube-controllers-778756694d-5pt4t", "timestamp":"2025-10-30 00:04:16.163478135 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:04:16.986731 containerd[1633]: 2025-10-30 00:04:16.163 [INFO][4245] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 00:04:16.986731 containerd[1633]: 2025-10-30 00:04:16.862 [INFO][4245] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 30 00:04:16.986731 containerd[1633]: 2025-10-30 00:04:16.862 [INFO][4245] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 00:04:16.986731 containerd[1633]: 2025-10-30 00:04:16.875 [INFO][4245] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.619728d513ee5165171c762e879c87b9f161047a50ad3783e3694120d42ca327" host="localhost" Oct 30 00:04:16.986731 containerd[1633]: 2025-10-30 00:04:16.895 [INFO][4245] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 00:04:16.986731 containerd[1633]: 2025-10-30 00:04:16.906 [INFO][4245] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 00:04:16.986731 containerd[1633]: 2025-10-30 00:04:16.911 [INFO][4245] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 00:04:16.986731 containerd[1633]: 2025-10-30 00:04:16.916 [INFO][4245] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 00:04:16.986731 containerd[1633]: 2025-10-30 00:04:16.916 [INFO][4245] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.619728d513ee5165171c762e879c87b9f161047a50ad3783e3694120d42ca327" host="localhost" Oct 30 00:04:16.986731 containerd[1633]: 2025-10-30 00:04:16.919 [INFO][4245] 
ipam/ipam.go 1780: Creating new handle: k8s-pod-network.619728d513ee5165171c762e879c87b9f161047a50ad3783e3694120d42ca327 Oct 30 00:04:16.986731 containerd[1633]: 2025-10-30 00:04:16.928 [INFO][4245] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.619728d513ee5165171c762e879c87b9f161047a50ad3783e3694120d42ca327" host="localhost" Oct 30 00:04:16.986731 containerd[1633]: 2025-10-30 00:04:16.946 [INFO][4245] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.619728d513ee5165171c762e879c87b9f161047a50ad3783e3694120d42ca327" host="localhost" Oct 30 00:04:16.986731 containerd[1633]: 2025-10-30 00:04:16.946 [INFO][4245] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.619728d513ee5165171c762e879c87b9f161047a50ad3783e3694120d42ca327" host="localhost" Oct 30 00:04:16.986731 containerd[1633]: 2025-10-30 00:04:16.948 [INFO][4245] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 30 00:04:16.986731 containerd[1633]: 2025-10-30 00:04:16.948 [INFO][4245] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="619728d513ee5165171c762e879c87b9f161047a50ad3783e3694120d42ca327" HandleID="k8s-pod-network.619728d513ee5165171c762e879c87b9f161047a50ad3783e3694120d42ca327" Workload="localhost-k8s-calico--kube--controllers--778756694d--5pt4t-eth0" Oct 30 00:04:16.987832 containerd[1633]: 2025-10-30 00:04:16.956 [INFO][4187] cni-plugin/k8s.go 418: Populated endpoint ContainerID="619728d513ee5165171c762e879c87b9f161047a50ad3783e3694120d42ca327" Namespace="calico-system" Pod="calico-kube-controllers-778756694d-5pt4t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--778756694d--5pt4t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--778756694d--5pt4t-eth0", GenerateName:"calico-kube-controllers-778756694d-", Namespace:"calico-system", SelfLink:"", UID:"c9ccbdc0-7a3b-420c-9200-91bd3b896e9d", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 3, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"778756694d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-778756694d-5pt4t", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8ff78795dd0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:04:16.987832 containerd[1633]: 2025-10-30 00:04:16.956 [INFO][4187] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="619728d513ee5165171c762e879c87b9f161047a50ad3783e3694120d42ca327" Namespace="calico-system" Pod="calico-kube-controllers-778756694d-5pt4t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--778756694d--5pt4t-eth0" Oct 30 00:04:16.987832 containerd[1633]: 2025-10-30 00:04:16.956 [INFO][4187] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8ff78795dd0 ContainerID="619728d513ee5165171c762e879c87b9f161047a50ad3783e3694120d42ca327" Namespace="calico-system" Pod="calico-kube-controllers-778756694d-5pt4t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--778756694d--5pt4t-eth0" Oct 30 00:04:16.987832 containerd[1633]: 2025-10-30 00:04:16.960 [INFO][4187] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="619728d513ee5165171c762e879c87b9f161047a50ad3783e3694120d42ca327" Namespace="calico-system" Pod="calico-kube-controllers-778756694d-5pt4t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--778756694d--5pt4t-eth0" Oct 30 00:04:16.987832 containerd[1633]: 2025-10-30 00:04:16.961 [INFO][4187] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="619728d513ee5165171c762e879c87b9f161047a50ad3783e3694120d42ca327" Namespace="calico-system" Pod="calico-kube-controllers-778756694d-5pt4t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--778756694d--5pt4t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--778756694d--5pt4t-eth0", GenerateName:"calico-kube-controllers-778756694d-", Namespace:"calico-system", SelfLink:"", UID:"c9ccbdc0-7a3b-420c-9200-91bd3b896e9d", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 3, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"778756694d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"619728d513ee5165171c762e879c87b9f161047a50ad3783e3694120d42ca327", Pod:"calico-kube-controllers-778756694d-5pt4t", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8ff78795dd0", MAC:"e2:95:60:64:95:06", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:04:16.987832 containerd[1633]: 2025-10-30 00:04:16.978 [INFO][4187] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="619728d513ee5165171c762e879c87b9f161047a50ad3783e3694120d42ca327" Namespace="calico-system" Pod="calico-kube-controllers-778756694d-5pt4t" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--778756694d--5pt4t-eth0" Oct 30 00:04:17.074925 systemd-networkd[1516]: cali3bbf7700f9d: Link UP Oct 30 00:04:17.077662 systemd-networkd[1516]: cali3bbf7700f9d: 
Gained carrier Oct 30 00:04:17.102058 containerd[1633]: 2025-10-30 00:04:16.521 [INFO][4304] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Oct 30 00:04:17.102058 containerd[1633]: 2025-10-30 00:04:16.631 [INFO][4304] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--85cf958ddd--dqv9z-eth0 whisker-85cf958ddd- calico-system c1be5870-aa7d-44d6-8228-72dd5ed8c5f5 958 0 2025-10-30 00:04:15 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:85cf958ddd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-85cf958ddd-dqv9z eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali3bbf7700f9d [] [] }} ContainerID="9bbeb55718d8cb9535e0de0109f11c884dd32032fe05ba5f9334b8d111191e3d" Namespace="calico-system" Pod="whisker-85cf958ddd-dqv9z" WorkloadEndpoint="localhost-k8s-whisker--85cf958ddd--dqv9z-" Oct 30 00:04:17.102058 containerd[1633]: 2025-10-30 00:04:16.631 [INFO][4304] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9bbeb55718d8cb9535e0de0109f11c884dd32032fe05ba5f9334b8d111191e3d" Namespace="calico-system" Pod="whisker-85cf958ddd-dqv9z" WorkloadEndpoint="localhost-k8s-whisker--85cf958ddd--dqv9z-eth0" Oct 30 00:04:17.102058 containerd[1633]: 2025-10-30 00:04:16.811 [INFO][4403] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9bbeb55718d8cb9535e0de0109f11c884dd32032fe05ba5f9334b8d111191e3d" HandleID="k8s-pod-network.9bbeb55718d8cb9535e0de0109f11c884dd32032fe05ba5f9334b8d111191e3d" Workload="localhost-k8s-whisker--85cf958ddd--dqv9z-eth0" Oct 30 00:04:17.102058 containerd[1633]: 2025-10-30 00:04:16.811 [INFO][4403] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9bbeb55718d8cb9535e0de0109f11c884dd32032fe05ba5f9334b8d111191e3d" 
HandleID="k8s-pod-network.9bbeb55718d8cb9535e0de0109f11c884dd32032fe05ba5f9334b8d111191e3d" Workload="localhost-k8s-whisker--85cf958ddd--dqv9z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004b0c80), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-85cf958ddd-dqv9z", "timestamp":"2025-10-30 00:04:16.811752917 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:04:17.102058 containerd[1633]: 2025-10-30 00:04:16.812 [INFO][4403] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 00:04:17.102058 containerd[1633]: 2025-10-30 00:04:16.949 [INFO][4403] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Oct 30 00:04:17.102058 containerd[1633]: 2025-10-30 00:04:16.949 [INFO][4403] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 00:04:17.102058 containerd[1633]: 2025-10-30 00:04:16.973 [INFO][4403] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9bbeb55718d8cb9535e0de0109f11c884dd32032fe05ba5f9334b8d111191e3d" host="localhost" Oct 30 00:04:17.102058 containerd[1633]: 2025-10-30 00:04:16.996 [INFO][4403] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 00:04:17.102058 containerd[1633]: 2025-10-30 00:04:17.007 [INFO][4403] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 00:04:17.102058 containerd[1633]: 2025-10-30 00:04:17.012 [INFO][4403] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 00:04:17.102058 containerd[1633]: 2025-10-30 00:04:17.016 [INFO][4403] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 00:04:17.102058 containerd[1633]: 2025-10-30 00:04:17.016 
[INFO][4403] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9bbeb55718d8cb9535e0de0109f11c884dd32032fe05ba5f9334b8d111191e3d" host="localhost" Oct 30 00:04:17.102058 containerd[1633]: 2025-10-30 00:04:17.026 [INFO][4403] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9bbeb55718d8cb9535e0de0109f11c884dd32032fe05ba5f9334b8d111191e3d Oct 30 00:04:17.102058 containerd[1633]: 2025-10-30 00:04:17.033 [INFO][4403] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9bbeb55718d8cb9535e0de0109f11c884dd32032fe05ba5f9334b8d111191e3d" host="localhost" Oct 30 00:04:17.102058 containerd[1633]: 2025-10-30 00:04:17.046 [INFO][4403] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.9bbeb55718d8cb9535e0de0109f11c884dd32032fe05ba5f9334b8d111191e3d" host="localhost" Oct 30 00:04:17.102058 containerd[1633]: 2025-10-30 00:04:17.046 [INFO][4403] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.9bbeb55718d8cb9535e0de0109f11c884dd32032fe05ba5f9334b8d111191e3d" host="localhost" Oct 30 00:04:17.102058 containerd[1633]: 2025-10-30 00:04:17.046 [INFO][4403] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Oct 30 00:04:17.102058 containerd[1633]: 2025-10-30 00:04:17.046 [INFO][4403] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="9bbeb55718d8cb9535e0de0109f11c884dd32032fe05ba5f9334b8d111191e3d" HandleID="k8s-pod-network.9bbeb55718d8cb9535e0de0109f11c884dd32032fe05ba5f9334b8d111191e3d" Workload="localhost-k8s-whisker--85cf958ddd--dqv9z-eth0" Oct 30 00:04:17.102682 containerd[1633]: 2025-10-30 00:04:17.061 [INFO][4304] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9bbeb55718d8cb9535e0de0109f11c884dd32032fe05ba5f9334b8d111191e3d" Namespace="calico-system" Pod="whisker-85cf958ddd-dqv9z" WorkloadEndpoint="localhost-k8s-whisker--85cf958ddd--dqv9z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--85cf958ddd--dqv9z-eth0", GenerateName:"whisker-85cf958ddd-", Namespace:"calico-system", SelfLink:"", UID:"c1be5870-aa7d-44d6-8228-72dd5ed8c5f5", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 4, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"85cf958ddd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-85cf958ddd-dqv9z", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3bbf7700f9d", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:04:17.102682 containerd[1633]: 2025-10-30 00:04:17.063 [INFO][4304] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="9bbeb55718d8cb9535e0de0109f11c884dd32032fe05ba5f9334b8d111191e3d" Namespace="calico-system" Pod="whisker-85cf958ddd-dqv9z" WorkloadEndpoint="localhost-k8s-whisker--85cf958ddd--dqv9z-eth0" Oct 30 00:04:17.102682 containerd[1633]: 2025-10-30 00:04:17.063 [INFO][4304] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3bbf7700f9d ContainerID="9bbeb55718d8cb9535e0de0109f11c884dd32032fe05ba5f9334b8d111191e3d" Namespace="calico-system" Pod="whisker-85cf958ddd-dqv9z" WorkloadEndpoint="localhost-k8s-whisker--85cf958ddd--dqv9z-eth0" Oct 30 00:04:17.102682 containerd[1633]: 2025-10-30 00:04:17.072 [INFO][4304] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9bbeb55718d8cb9535e0de0109f11c884dd32032fe05ba5f9334b8d111191e3d" Namespace="calico-system" Pod="whisker-85cf958ddd-dqv9z" WorkloadEndpoint="localhost-k8s-whisker--85cf958ddd--dqv9z-eth0" Oct 30 00:04:17.102682 containerd[1633]: 2025-10-30 00:04:17.072 [INFO][4304] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9bbeb55718d8cb9535e0de0109f11c884dd32032fe05ba5f9334b8d111191e3d" Namespace="calico-system" Pod="whisker-85cf958ddd-dqv9z" WorkloadEndpoint="localhost-k8s-whisker--85cf958ddd--dqv9z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--85cf958ddd--dqv9z-eth0", GenerateName:"whisker-85cf958ddd-", Namespace:"calico-system", SelfLink:"", UID:"c1be5870-aa7d-44d6-8228-72dd5ed8c5f5", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 4, 15, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"85cf958ddd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9bbeb55718d8cb9535e0de0109f11c884dd32032fe05ba5f9334b8d111191e3d", Pod:"whisker-85cf958ddd-dqv9z", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3bbf7700f9d", MAC:"be:bd:03:b9:c7:72", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:04:17.102682 containerd[1633]: 2025-10-30 00:04:17.095 [INFO][4304] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9bbeb55718d8cb9535e0de0109f11c884dd32032fe05ba5f9334b8d111191e3d" Namespace="calico-system" Pod="whisker-85cf958ddd-dqv9z" WorkloadEndpoint="localhost-k8s-whisker--85cf958ddd--dqv9z-eth0" Oct 30 00:04:17.131840 containerd[1633]: time="2025-10-30T00:04:17.131661140Z" level=info msg="connecting to shim 619728d513ee5165171c762e879c87b9f161047a50ad3783e3694120d42ca327" address="unix:///run/containerd/s/4d3bee78f989701ab9561f13285a9fb1946952dcbbc6b36a9621bd8f7a600f64" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:04:17.132775 containerd[1633]: time="2025-10-30T00:04:17.131677882Z" level=info msg="connecting to shim 2918dfa8067012a91e450f49dc1c6bdec236fbe69aaf6c466f84dd8d99c088ef" address="unix:///run/containerd/s/92fb832a114e71948cb88ce54b148974c01dde9c603847eed20eb0df918dd95d" namespace=k8s.io protocol=ttrpc 
version=3 Oct 30 00:04:17.132890 containerd[1633]: time="2025-10-30T00:04:17.131731074Z" level=info msg="connecting to shim a97a5b38379e82d3d8a7183acc7d1cb89d4c8e407407f7dbe9bfe8099a6ff4f1" address="unix:///run/containerd/s/86e1196c83eb7df7f01e3b7d6e2da977844d0ede8d2dea5ae178f048e08ca1af" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:04:17.153439 containerd[1633]: time="2025-10-30T00:04:17.153367726Z" level=info msg="connecting to shim 9bbeb55718d8cb9535e0de0109f11c884dd32032fe05ba5f9334b8d111191e3d" address="unix:///run/containerd/s/851b646f9106619c2319d3fa6038630e1803818911476377092ba710e144e351" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:04:17.200202 systemd[1]: Started cri-containerd-9bbeb55718d8cb9535e0de0109f11c884dd32032fe05ba5f9334b8d111191e3d.scope - libcontainer container 9bbeb55718d8cb9535e0de0109f11c884dd32032fe05ba5f9334b8d111191e3d. Oct 30 00:04:17.208274 systemd[1]: Started cri-containerd-2918dfa8067012a91e450f49dc1c6bdec236fbe69aaf6c466f84dd8d99c088ef.scope - libcontainer container 2918dfa8067012a91e450f49dc1c6bdec236fbe69aaf6c466f84dd8d99c088ef. Oct 30 00:04:17.210517 systemd[1]: Started cri-containerd-a97a5b38379e82d3d8a7183acc7d1cb89d4c8e407407f7dbe9bfe8099a6ff4f1.scope - libcontainer container a97a5b38379e82d3d8a7183acc7d1cb89d4c8e407407f7dbe9bfe8099a6ff4f1. Oct 30 00:04:17.217268 systemd[1]: Started cri-containerd-619728d513ee5165171c762e879c87b9f161047a50ad3783e3694120d42ca327.scope - libcontainer container 619728d513ee5165171c762e879c87b9f161047a50ad3783e3694120d42ca327. 
Oct 30 00:04:17.235012 systemd-resolved[1304]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 00:04:17.245499 systemd-resolved[1304]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 00:04:17.247785 systemd-resolved[1304]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 00:04:17.256781 systemd-resolved[1304]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 00:04:17.312032 systemd-networkd[1516]: vxlan.calico: Link UP Oct 30 00:04:17.312044 systemd-networkd[1516]: vxlan.calico: Gained carrier Oct 30 00:04:17.338019 containerd[1633]: time="2025-10-30T00:04:17.336662734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-85cf958ddd-dqv9z,Uid:c1be5870-aa7d-44d6-8228-72dd5ed8c5f5,Namespace:calico-system,Attempt:0,} returns sandbox id \"9bbeb55718d8cb9535e0de0109f11c884dd32032fe05ba5f9334b8d111191e3d\"" Oct 30 00:04:17.347431 containerd[1633]: time="2025-10-30T00:04:17.347358757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-kmqpz,Uid:6d8696ee-1cc2-47c7-9396-66f6dddcfb7a,Namespace:kube-system,Attempt:0,} returns sandbox id \"2918dfa8067012a91e450f49dc1c6bdec236fbe69aaf6c466f84dd8d99c088ef\"" Oct 30 00:04:17.351109 containerd[1633]: time="2025-10-30T00:04:17.351051117Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 30 00:04:17.357204 kubelet[2857]: E1030 00:04:17.356526 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:04:17.363804 containerd[1633]: time="2025-10-30T00:04:17.362054377Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-7c778bb748-7dwtp,Uid:76c98e53-eb5b-4690-b648-f39ba68c3761,Namespace:calico-system,Attempt:0,} returns sandbox id \"a97a5b38379e82d3d8a7183acc7d1cb89d4c8e407407f7dbe9bfe8099a6ff4f1\"" Oct 30 00:04:17.364399 containerd[1633]: time="2025-10-30T00:04:17.364359681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-778756694d-5pt4t,Uid:c9ccbdc0-7a3b-420c-9200-91bd3b896e9d,Namespace:calico-system,Attempt:0,} returns sandbox id \"619728d513ee5165171c762e879c87b9f161047a50ad3783e3694120d42ca327\"" Oct 30 00:04:17.377107 containerd[1633]: time="2025-10-30T00:04:17.376875767Z" level=info msg="CreateContainer within sandbox \"2918dfa8067012a91e450f49dc1c6bdec236fbe69aaf6c466f84dd8d99c088ef\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 30 00:04:17.403778 containerd[1633]: time="2025-10-30T00:04:17.403619780Z" level=info msg="Container de7d235d5f7933ef692eab435e36015bc35da91f2af2f2d5f471a3fc00b39762: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:04:17.415191 containerd[1633]: time="2025-10-30T00:04:17.415111074Z" level=info msg="CreateContainer within sandbox \"2918dfa8067012a91e450f49dc1c6bdec236fbe69aaf6c466f84dd8d99c088ef\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"de7d235d5f7933ef692eab435e36015bc35da91f2af2f2d5f471a3fc00b39762\"" Oct 30 00:04:17.419207 containerd[1633]: time="2025-10-30T00:04:17.419135580Z" level=info msg="StartContainer for \"de7d235d5f7933ef692eab435e36015bc35da91f2af2f2d5f471a3fc00b39762\"" Oct 30 00:04:17.421442 containerd[1633]: time="2025-10-30T00:04:17.421377544Z" level=info msg="connecting to shim de7d235d5f7933ef692eab435e36015bc35da91f2af2f2d5f471a3fc00b39762" address="unix:///run/containerd/s/92fb832a114e71948cb88ce54b148974c01dde9c603847eed20eb0df918dd95d" protocol=ttrpc version=3 Oct 30 00:04:17.457896 systemd[1]: Started cri-containerd-de7d235d5f7933ef692eab435e36015bc35da91f2af2f2d5f471a3fc00b39762.scope - libcontainer 
container de7d235d5f7933ef692eab435e36015bc35da91f2af2f2d5f471a3fc00b39762. Oct 30 00:04:17.703950 containerd[1633]: time="2025-10-30T00:04:17.703680271Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:04:17.809043 containerd[1633]: time="2025-10-30T00:04:17.808993544Z" level=info msg="StartContainer for \"de7d235d5f7933ef692eab435e36015bc35da91f2af2f2d5f471a3fc00b39762\" returns successfully" Oct 30 00:04:17.814350 kubelet[2857]: E1030 00:04:17.814310 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:04:17.864618 containerd[1633]: time="2025-10-30T00:04:17.864321811Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 30 00:04:17.874150 containerd[1633]: time="2025-10-30T00:04:17.874053767Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 30 00:04:17.874424 kubelet[2857]: E1030 00:04:17.874366 2857 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 00:04:17.874424 kubelet[2857]: E1030 00:04:17.874430 2857 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 00:04:17.874760 kubelet[2857]: E1030 00:04:17.874720 2857 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-85cf958ddd-dqv9z_calico-system(c1be5870-aa7d-44d6-8228-72dd5ed8c5f5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 30 00:04:17.874843 containerd[1633]: time="2025-10-30T00:04:17.874807640Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 30 00:04:17.915968 systemd-networkd[1516]: calie5e1b2cdb5f: Gained IPv6LL Oct 30 00:04:17.926147 kubelet[2857]: I1030 00:04:17.925956 2857 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-kmqpz" podStartSLOduration=53.925939432 podStartE2EDuration="53.925939432s" podCreationTimestamp="2025-10-30 00:03:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 00:04:17.925017196 +0000 UTC m=+58.281526963" watchObservedRunningTime="2025-10-30 00:04:17.925939432 +0000 UTC m=+58.282449199" Oct 30 00:04:18.235801 systemd-networkd[1516]: cali4aed3629613: Gained IPv6LL Oct 30 00:04:18.250708 containerd[1633]: time="2025-10-30T00:04:18.250596675Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:04:18.554384 containerd[1633]: time="2025-10-30T00:04:18.553976549Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 30 00:04:18.554384 containerd[1633]: time="2025-10-30T00:04:18.554184777Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 30 00:04:18.555584 kubelet[2857]: E1030 00:04:18.554734 2857 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 00:04:18.555584 kubelet[2857]: E1030 00:04:18.554841 2857 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 00:04:18.555584 kubelet[2857]: E1030 00:04:18.555225 2857 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-778756694d-5pt4t_calico-system(c9ccbdc0-7a3b-420c-9200-91bd3b896e9d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 30 00:04:18.555584 kubelet[2857]: E1030 00:04:18.555293 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-778756694d-5pt4t" podUID="c9ccbdc0-7a3b-420c-9200-91bd3b896e9d" Oct 30 00:04:18.556560 containerd[1633]: time="2025-10-30T00:04:18.555966187Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 30 00:04:18.814082 systemd-networkd[1516]: cali8ff78795dd0: Gained IPv6LL Oct 30 00:04:18.817376 systemd-networkd[1516]: vxlan.calico: Gained IPv6LL Oct 30 00:04:18.818629 kubelet[2857]: E1030 00:04:18.818467 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:04:18.821403 kubelet[2857]: E1030 00:04:18.821316 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-778756694d-5pt4t" podUID="c9ccbdc0-7a3b-420c-9200-91bd3b896e9d" Oct 30 00:04:18.877857 systemd-networkd[1516]: cali3bbf7700f9d: Gained IPv6LL Oct 30 00:04:18.948487 containerd[1633]: time="2025-10-30T00:04:18.948395756Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:04:18.958362 containerd[1633]: time="2025-10-30T00:04:18.958265057Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 30 00:04:18.958588 containerd[1633]: time="2025-10-30T00:04:18.958371550Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 30 00:04:18.958824 kubelet[2857]: E1030 00:04:18.958589 2857 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 00:04:18.958944 kubelet[2857]: E1030 00:04:18.958829 2857 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 00:04:18.959695 kubelet[2857]: E1030 00:04:18.959061 2857 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-7dwtp_calico-system(76c98e53-eb5b-4690-b648-f39ba68c3761): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 30 00:04:18.959695 kubelet[2857]: E1030 00:04:18.959121 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-7dwtp" podUID="76c98e53-eb5b-4690-b648-f39ba68c3761" Oct 30 00:04:18.959846 containerd[1633]: time="2025-10-30T00:04:18.959277114Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 30 00:04:19.331965 containerd[1633]: time="2025-10-30T00:04:19.331888402Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:04:19.471364 containerd[1633]: time="2025-10-30T00:04:19.471235867Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 30 00:04:19.471364 containerd[1633]: time="2025-10-30T00:04:19.471313454Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 30 00:04:19.471804 kubelet[2857]: E1030 00:04:19.471736 2857 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 00:04:19.471804 kubelet[2857]: E1030 00:04:19.471800 2857 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" 
Oct 30 00:04:19.471933 kubelet[2857]: E1030 00:04:19.471886 2857 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-85cf958ddd-dqv9z_calico-system(c1be5870-aa7d-44d6-8228-72dd5ed8c5f5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 30 00:04:19.471975 kubelet[2857]: E1030 00:04:19.471928 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85cf958ddd-dqv9z" podUID="c1be5870-aa7d-44d6-8228-72dd5ed8c5f5" Oct 30 00:04:19.821590 kubelet[2857]: E1030 00:04:19.821518 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-7dwtp" 
podUID="76c98e53-eb5b-4690-b648-f39ba68c3761" Oct 30 00:04:19.822347 kubelet[2857]: E1030 00:04:19.821874 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85cf958ddd-dqv9z" podUID="c1be5870-aa7d-44d6-8228-72dd5ed8c5f5" Oct 30 00:04:21.892796 systemd[1]: Started sshd@10-10.0.0.82:22-10.0.0.1:47574.service - OpenSSH per-connection server daemon (10.0.0.1:47574). Oct 30 00:04:21.985407 sshd[4780]: Accepted publickey for core from 10.0.0.1 port 47574 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 30 00:04:21.988205 sshd-session[4780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:04:21.994168 systemd-logind[1613]: New session 11 of user core. Oct 30 00:04:22.004803 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 30 00:04:22.180065 sshd[4783]: Connection closed by 10.0.0.1 port 47574 Oct 30 00:04:22.180344 sshd-session[4780]: pam_unix(sshd:session): session closed for user core Oct 30 00:04:22.185825 systemd[1]: sshd@10-10.0.0.82:22-10.0.0.1:47574.service: Deactivated successfully. 
Oct 30 00:04:22.188343 systemd[1]: session-11.scope: Deactivated successfully. Oct 30 00:04:22.189404 systemd-logind[1613]: Session 11 logged out. Waiting for processes to exit. Oct 30 00:04:22.191365 systemd-logind[1613]: Removed session 11. Oct 30 00:04:24.801873 containerd[1633]: time="2025-10-30T00:04:24.801808022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54f47b4cdd-94wcm,Uid:f4d9bc01-6958-4087-977b-6989585a84eb,Namespace:calico-apiserver,Attempt:0,}" Oct 30 00:04:24.893800 containerd[1633]: time="2025-10-30T00:04:24.893723819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bmzt2,Uid:dac688c3-f50b-4d08-95db-f1aa2487f334,Namespace:calico-system,Attempt:0,}" Oct 30 00:04:25.141006 systemd-networkd[1516]: calie3cb0525093: Link UP Oct 30 00:04:25.141526 systemd-networkd[1516]: calie3cb0525093: Gained carrier Oct 30 00:04:25.260764 containerd[1633]: 2025-10-30 00:04:24.961 [INFO][4807] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--54f47b4cdd--94wcm-eth0 calico-apiserver-54f47b4cdd- calico-apiserver f4d9bc01-6958-4087-977b-6989585a84eb 858 0 2025-10-30 00:03:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:54f47b4cdd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-54f47b4cdd-94wcm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie3cb0525093 [] [] }} ContainerID="5228362ca4c2cc5cc283060e821abbd637ddbcbbfc3ef4d521de998419371f72" Namespace="calico-apiserver" Pod="calico-apiserver-54f47b4cdd-94wcm" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f47b4cdd--94wcm-" Oct 30 00:04:25.260764 containerd[1633]: 2025-10-30 00:04:24.961 [INFO][4807] cni-plugin/k8s.go 74: Extracted 
identifiers for CmdAddK8s ContainerID="5228362ca4c2cc5cc283060e821abbd637ddbcbbfc3ef4d521de998419371f72" Namespace="calico-apiserver" Pod="calico-apiserver-54f47b4cdd-94wcm" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f47b4cdd--94wcm-eth0" Oct 30 00:04:25.260764 containerd[1633]: 2025-10-30 00:04:25.002 [INFO][4818] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5228362ca4c2cc5cc283060e821abbd637ddbcbbfc3ef4d521de998419371f72" HandleID="k8s-pod-network.5228362ca4c2cc5cc283060e821abbd637ddbcbbfc3ef4d521de998419371f72" Workload="localhost-k8s-calico--apiserver--54f47b4cdd--94wcm-eth0" Oct 30 00:04:25.260764 containerd[1633]: 2025-10-30 00:04:25.003 [INFO][4818] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5228362ca4c2cc5cc283060e821abbd637ddbcbbfc3ef4d521de998419371f72" HandleID="k8s-pod-network.5228362ca4c2cc5cc283060e821abbd637ddbcbbfc3ef4d521de998419371f72" Workload="localhost-k8s-calico--apiserver--54f47b4cdd--94wcm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000443720), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-54f47b4cdd-94wcm", "timestamp":"2025-10-30 00:04:25.002916085 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:04:25.260764 containerd[1633]: 2025-10-30 00:04:25.003 [INFO][4818] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 00:04:25.260764 containerd[1633]: 2025-10-30 00:04:25.003 [INFO][4818] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 30 00:04:25.260764 containerd[1633]: 2025-10-30 00:04:25.003 [INFO][4818] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 00:04:25.260764 containerd[1633]: 2025-10-30 00:04:25.036 [INFO][4818] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5228362ca4c2cc5cc283060e821abbd637ddbcbbfc3ef4d521de998419371f72" host="localhost" Oct 30 00:04:25.260764 containerd[1633]: 2025-10-30 00:04:25.042 [INFO][4818] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 00:04:25.260764 containerd[1633]: 2025-10-30 00:04:25.048 [INFO][4818] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 00:04:25.260764 containerd[1633]: 2025-10-30 00:04:25.050 [INFO][4818] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 00:04:25.260764 containerd[1633]: 2025-10-30 00:04:25.054 [INFO][4818] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 00:04:25.260764 containerd[1633]: 2025-10-30 00:04:25.054 [INFO][4818] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5228362ca4c2cc5cc283060e821abbd637ddbcbbfc3ef4d521de998419371f72" host="localhost" Oct 30 00:04:25.260764 containerd[1633]: 2025-10-30 00:04:25.056 [INFO][4818] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5228362ca4c2cc5cc283060e821abbd637ddbcbbfc3ef4d521de998419371f72 Oct 30 00:04:25.260764 containerd[1633]: 2025-10-30 00:04:25.079 [INFO][4818] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5228362ca4c2cc5cc283060e821abbd637ddbcbbfc3ef4d521de998419371f72" host="localhost" Oct 30 00:04:25.260764 containerd[1633]: 2025-10-30 00:04:25.134 [INFO][4818] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.5228362ca4c2cc5cc283060e821abbd637ddbcbbfc3ef4d521de998419371f72" host="localhost" Oct 30 00:04:25.260764 containerd[1633]: 2025-10-30 00:04:25.134 [INFO][4818] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.5228362ca4c2cc5cc283060e821abbd637ddbcbbfc3ef4d521de998419371f72" host="localhost" Oct 30 00:04:25.260764 containerd[1633]: 2025-10-30 00:04:25.134 [INFO][4818] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 30 00:04:25.260764 containerd[1633]: 2025-10-30 00:04:25.134 [INFO][4818] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="5228362ca4c2cc5cc283060e821abbd637ddbcbbfc3ef4d521de998419371f72" HandleID="k8s-pod-network.5228362ca4c2cc5cc283060e821abbd637ddbcbbfc3ef4d521de998419371f72" Workload="localhost-k8s-calico--apiserver--54f47b4cdd--94wcm-eth0" Oct 30 00:04:25.261650 containerd[1633]: 2025-10-30 00:04:25.138 [INFO][4807] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5228362ca4c2cc5cc283060e821abbd637ddbcbbfc3ef4d521de998419371f72" Namespace="calico-apiserver" Pod="calico-apiserver-54f47b4cdd-94wcm" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f47b4cdd--94wcm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54f47b4cdd--94wcm-eth0", GenerateName:"calico-apiserver-54f47b4cdd-", Namespace:"calico-apiserver", SelfLink:"", UID:"f4d9bc01-6958-4087-977b-6989585a84eb", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 3, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54f47b4cdd", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-54f47b4cdd-94wcm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie3cb0525093", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:04:25.261650 containerd[1633]: 2025-10-30 00:04:25.138 [INFO][4807] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="5228362ca4c2cc5cc283060e821abbd637ddbcbbfc3ef4d521de998419371f72" Namespace="calico-apiserver" Pod="calico-apiserver-54f47b4cdd-94wcm" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f47b4cdd--94wcm-eth0" Oct 30 00:04:25.261650 containerd[1633]: 2025-10-30 00:04:25.138 [INFO][4807] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie3cb0525093 ContainerID="5228362ca4c2cc5cc283060e821abbd637ddbcbbfc3ef4d521de998419371f72" Namespace="calico-apiserver" Pod="calico-apiserver-54f47b4cdd-94wcm" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f47b4cdd--94wcm-eth0" Oct 30 00:04:25.261650 containerd[1633]: 2025-10-30 00:04:25.141 [INFO][4807] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5228362ca4c2cc5cc283060e821abbd637ddbcbbfc3ef4d521de998419371f72" Namespace="calico-apiserver" Pod="calico-apiserver-54f47b4cdd-94wcm" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f47b4cdd--94wcm-eth0" Oct 30 00:04:25.261650 containerd[1633]: 2025-10-30 00:04:25.141 [INFO][4807] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="5228362ca4c2cc5cc283060e821abbd637ddbcbbfc3ef4d521de998419371f72" Namespace="calico-apiserver" Pod="calico-apiserver-54f47b4cdd-94wcm" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f47b4cdd--94wcm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54f47b4cdd--94wcm-eth0", GenerateName:"calico-apiserver-54f47b4cdd-", Namespace:"calico-apiserver", SelfLink:"", UID:"f4d9bc01-6958-4087-977b-6989585a84eb", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 3, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54f47b4cdd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5228362ca4c2cc5cc283060e821abbd637ddbcbbfc3ef4d521de998419371f72", Pod:"calico-apiserver-54f47b4cdd-94wcm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie3cb0525093", MAC:"6a:cb:dc:92:c5:58", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:04:25.261650 containerd[1633]: 2025-10-30 00:04:25.256 [INFO][4807] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="5228362ca4c2cc5cc283060e821abbd637ddbcbbfc3ef4d521de998419371f72" Namespace="calico-apiserver" Pod="calico-apiserver-54f47b4cdd-94wcm" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f47b4cdd--94wcm-eth0" Oct 30 00:04:25.396341 systemd-networkd[1516]: cali24511f4f9cc: Link UP Oct 30 00:04:25.397043 systemd-networkd[1516]: cali24511f4f9cc: Gained carrier Oct 30 00:04:25.633655 containerd[1633]: 2025-10-30 00:04:25.256 [INFO][4826] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--bmzt2-eth0 csi-node-driver- calico-system dac688c3-f50b-4d08-95db-f1aa2487f334 724 0 2025-10-30 00:03:43 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-bmzt2 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali24511f4f9cc [] [] }} ContainerID="791cc71ac5b52ffdac09749f916899b6d29cc2f4588365e2bb21bcb53c132ba4" Namespace="calico-system" Pod="csi-node-driver-bmzt2" WorkloadEndpoint="localhost-k8s-csi--node--driver--bmzt2-" Oct 30 00:04:25.633655 containerd[1633]: 2025-10-30 00:04:25.256 [INFO][4826] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="791cc71ac5b52ffdac09749f916899b6d29cc2f4588365e2bb21bcb53c132ba4" Namespace="calico-system" Pod="csi-node-driver-bmzt2" WorkloadEndpoint="localhost-k8s-csi--node--driver--bmzt2-eth0" Oct 30 00:04:25.633655 containerd[1633]: 2025-10-30 00:04:25.289 [INFO][4848] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="791cc71ac5b52ffdac09749f916899b6d29cc2f4588365e2bb21bcb53c132ba4" HandleID="k8s-pod-network.791cc71ac5b52ffdac09749f916899b6d29cc2f4588365e2bb21bcb53c132ba4" 
Workload="localhost-k8s-csi--node--driver--bmzt2-eth0" Oct 30 00:04:25.633655 containerd[1633]: 2025-10-30 00:04:25.289 [INFO][4848] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="791cc71ac5b52ffdac09749f916899b6d29cc2f4588365e2bb21bcb53c132ba4" HandleID="k8s-pod-network.791cc71ac5b52ffdac09749f916899b6d29cc2f4588365e2bb21bcb53c132ba4" Workload="localhost-k8s-csi--node--driver--bmzt2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001b93f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-bmzt2", "timestamp":"2025-10-30 00:04:25.289538992 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:04:25.633655 containerd[1633]: 2025-10-30 00:04:25.289 [INFO][4848] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 00:04:25.633655 containerd[1633]: 2025-10-30 00:04:25.289 [INFO][4848] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 30 00:04:25.633655 containerd[1633]: 2025-10-30 00:04:25.289 [INFO][4848] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 00:04:25.633655 containerd[1633]: 2025-10-30 00:04:25.297 [INFO][4848] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.791cc71ac5b52ffdac09749f916899b6d29cc2f4588365e2bb21bcb53c132ba4" host="localhost" Oct 30 00:04:25.633655 containerd[1633]: 2025-10-30 00:04:25.301 [INFO][4848] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 00:04:25.633655 containerd[1633]: 2025-10-30 00:04:25.305 [INFO][4848] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 00:04:25.633655 containerd[1633]: 2025-10-30 00:04:25.307 [INFO][4848] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 00:04:25.633655 containerd[1633]: 2025-10-30 00:04:25.309 [INFO][4848] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 00:04:25.633655 containerd[1633]: 2025-10-30 00:04:25.309 [INFO][4848] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.791cc71ac5b52ffdac09749f916899b6d29cc2f4588365e2bb21bcb53c132ba4" host="localhost" Oct 30 00:04:25.633655 containerd[1633]: 2025-10-30 00:04:25.310 [INFO][4848] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.791cc71ac5b52ffdac09749f916899b6d29cc2f4588365e2bb21bcb53c132ba4 Oct 30 00:04:25.633655 containerd[1633]: 2025-10-30 00:04:25.339 [INFO][4848] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.791cc71ac5b52ffdac09749f916899b6d29cc2f4588365e2bb21bcb53c132ba4" host="localhost" Oct 30 00:04:25.633655 containerd[1633]: 2025-10-30 00:04:25.388 [INFO][4848] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.791cc71ac5b52ffdac09749f916899b6d29cc2f4588365e2bb21bcb53c132ba4" host="localhost" Oct 30 00:04:25.633655 containerd[1633]: 2025-10-30 00:04:25.388 [INFO][4848] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.791cc71ac5b52ffdac09749f916899b6d29cc2f4588365e2bb21bcb53c132ba4" host="localhost" Oct 30 00:04:25.633655 containerd[1633]: 2025-10-30 00:04:25.388 [INFO][4848] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 30 00:04:25.633655 containerd[1633]: 2025-10-30 00:04:25.388 [INFO][4848] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="791cc71ac5b52ffdac09749f916899b6d29cc2f4588365e2bb21bcb53c132ba4" HandleID="k8s-pod-network.791cc71ac5b52ffdac09749f916899b6d29cc2f4588365e2bb21bcb53c132ba4" Workload="localhost-k8s-csi--node--driver--bmzt2-eth0" Oct 30 00:04:25.634411 containerd[1633]: 2025-10-30 00:04:25.392 [INFO][4826] cni-plugin/k8s.go 418: Populated endpoint ContainerID="791cc71ac5b52ffdac09749f916899b6d29cc2f4588365e2bb21bcb53c132ba4" Namespace="calico-system" Pod="csi-node-driver-bmzt2" WorkloadEndpoint="localhost-k8s-csi--node--driver--bmzt2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--bmzt2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"dac688c3-f50b-4d08-95db-f1aa2487f334", ResourceVersion:"724", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 3, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-bmzt2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali24511f4f9cc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:04:25.634411 containerd[1633]: 2025-10-30 00:04:25.392 [INFO][4826] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="791cc71ac5b52ffdac09749f916899b6d29cc2f4588365e2bb21bcb53c132ba4" Namespace="calico-system" Pod="csi-node-driver-bmzt2" WorkloadEndpoint="localhost-k8s-csi--node--driver--bmzt2-eth0" Oct 30 00:04:25.634411 containerd[1633]: 2025-10-30 00:04:25.392 [INFO][4826] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali24511f4f9cc ContainerID="791cc71ac5b52ffdac09749f916899b6d29cc2f4588365e2bb21bcb53c132ba4" Namespace="calico-system" Pod="csi-node-driver-bmzt2" WorkloadEndpoint="localhost-k8s-csi--node--driver--bmzt2-eth0" Oct 30 00:04:25.634411 containerd[1633]: 2025-10-30 00:04:25.398 [INFO][4826] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="791cc71ac5b52ffdac09749f916899b6d29cc2f4588365e2bb21bcb53c132ba4" Namespace="calico-system" Pod="csi-node-driver-bmzt2" WorkloadEndpoint="localhost-k8s-csi--node--driver--bmzt2-eth0" Oct 30 00:04:25.634411 containerd[1633]: 2025-10-30 00:04:25.400 [INFO][4826] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="791cc71ac5b52ffdac09749f916899b6d29cc2f4588365e2bb21bcb53c132ba4" 
Namespace="calico-system" Pod="csi-node-driver-bmzt2" WorkloadEndpoint="localhost-k8s-csi--node--driver--bmzt2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--bmzt2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"dac688c3-f50b-4d08-95db-f1aa2487f334", ResourceVersion:"724", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 3, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"791cc71ac5b52ffdac09749f916899b6d29cc2f4588365e2bb21bcb53c132ba4", Pod:"csi-node-driver-bmzt2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali24511f4f9cc", MAC:"ca:47:9c:8d:30:9b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:04:25.634411 containerd[1633]: 2025-10-30 00:04:25.627 [INFO][4826] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="791cc71ac5b52ffdac09749f916899b6d29cc2f4588365e2bb21bcb53c132ba4" Namespace="calico-system" Pod="csi-node-driver-bmzt2" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--bmzt2-eth0" Oct 30 00:04:25.869629 kubelet[2857]: E1030 00:04:25.869519 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:04:25.870932 containerd[1633]: time="2025-10-30T00:04:25.870785697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-666sp,Uid:54eb03ba-2fa6-4d86-891f-3330d4c9a1a2,Namespace:kube-system,Attempt:0,}" Oct 30 00:04:25.930017 containerd[1633]: time="2025-10-30T00:04:25.929901010Z" level=info msg="connecting to shim 5228362ca4c2cc5cc283060e821abbd637ddbcbbfc3ef4d521de998419371f72" address="unix:///run/containerd/s/78d1c53d55b4215fbff62de25f0f742859267bcf88eee4c57b45ed76edbf095a" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:04:25.977030 systemd[1]: Started cri-containerd-5228362ca4c2cc5cc283060e821abbd637ddbcbbfc3ef4d521de998419371f72.scope - libcontainer container 5228362ca4c2cc5cc283060e821abbd637ddbcbbfc3ef4d521de998419371f72. 
Oct 30 00:04:25.999767 systemd-resolved[1304]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 00:04:26.076810 containerd[1633]: time="2025-10-30T00:04:26.076727865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54f47b4cdd-94wcm,Uid:f4d9bc01-6958-4087-977b-6989585a84eb,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5228362ca4c2cc5cc283060e821abbd637ddbcbbfc3ef4d521de998419371f72\"" Oct 30 00:04:26.080214 containerd[1633]: time="2025-10-30T00:04:26.080161186Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 00:04:26.310537 containerd[1633]: time="2025-10-30T00:04:26.310472041Z" level=info msg="connecting to shim 791cc71ac5b52ffdac09749f916899b6d29cc2f4588365e2bb21bcb53c132ba4" address="unix:///run/containerd/s/bff7bc6bcf877dc7eae94159b4e7dce19247892f7f5ef842f8f132b5d1b1eafe" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:04:26.345478 systemd-networkd[1516]: calia5e8f64bab6: Link UP Oct 30 00:04:26.346061 systemd-networkd[1516]: calia5e8f64bab6: Gained carrier Oct 30 00:04:26.347136 systemd[1]: Started cri-containerd-791cc71ac5b52ffdac09749f916899b6d29cc2f4588365e2bb21bcb53c132ba4.scope - libcontainer container 791cc71ac5b52ffdac09749f916899b6d29cc2f4588365e2bb21bcb53c132ba4. 
Oct 30 00:04:26.368364 systemd-resolved[1304]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 00:04:26.406351 containerd[1633]: time="2025-10-30T00:04:26.406291055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bmzt2,Uid:dac688c3-f50b-4d08-95db-f1aa2487f334,Namespace:calico-system,Attempt:0,} returns sandbox id \"791cc71ac5b52ffdac09749f916899b6d29cc2f4588365e2bb21bcb53c132ba4\"" Oct 30 00:04:26.427806 systemd-networkd[1516]: calie3cb0525093: Gained IPv6LL Oct 30 00:04:26.473542 containerd[1633]: 2025-10-30 00:04:26.133 [INFO][4916] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--666sp-eth0 coredns-66bc5c9577- kube-system 54eb03ba-2fa6-4d86-891f-3330d4c9a1a2 857 0 2025-10-30 00:03:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-666sp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia5e8f64bab6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="4f6a27d343c2a681f43227ca9c16a2bd6bf1797766d78fa9b15ce82cad0b083a" Namespace="kube-system" Pod="coredns-66bc5c9577-666sp" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--666sp-" Oct 30 00:04:26.473542 containerd[1633]: 2025-10-30 00:04:26.133 [INFO][4916] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4f6a27d343c2a681f43227ca9c16a2bd6bf1797766d78fa9b15ce82cad0b083a" Namespace="kube-system" Pod="coredns-66bc5c9577-666sp" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--666sp-eth0" Oct 30 00:04:26.473542 containerd[1633]: 2025-10-30 00:04:26.194 [INFO][4931] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="4f6a27d343c2a681f43227ca9c16a2bd6bf1797766d78fa9b15ce82cad0b083a" HandleID="k8s-pod-network.4f6a27d343c2a681f43227ca9c16a2bd6bf1797766d78fa9b15ce82cad0b083a" Workload="localhost-k8s-coredns--66bc5c9577--666sp-eth0" Oct 30 00:04:26.473542 containerd[1633]: 2025-10-30 00:04:26.194 [INFO][4931] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4f6a27d343c2a681f43227ca9c16a2bd6bf1797766d78fa9b15ce82cad0b083a" HandleID="k8s-pod-network.4f6a27d343c2a681f43227ca9c16a2bd6bf1797766d78fa9b15ce82cad0b083a" Workload="localhost-k8s-coredns--66bc5c9577--666sp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cced0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-666sp", "timestamp":"2025-10-30 00:04:26.194561297 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:04:26.473542 containerd[1633]: 2025-10-30 00:04:26.194 [INFO][4931] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 00:04:26.473542 containerd[1633]: 2025-10-30 00:04:26.194 [INFO][4931] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 30 00:04:26.473542 containerd[1633]: 2025-10-30 00:04:26.194 [INFO][4931] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 00:04:26.473542 containerd[1633]: 2025-10-30 00:04:26.209 [INFO][4931] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4f6a27d343c2a681f43227ca9c16a2bd6bf1797766d78fa9b15ce82cad0b083a" host="localhost" Oct 30 00:04:26.473542 containerd[1633]: 2025-10-30 00:04:26.268 [INFO][4931] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 00:04:26.473542 containerd[1633]: 2025-10-30 00:04:26.274 [INFO][4931] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 00:04:26.473542 containerd[1633]: 2025-10-30 00:04:26.276 [INFO][4931] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 00:04:26.473542 containerd[1633]: 2025-10-30 00:04:26.278 [INFO][4931] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 00:04:26.473542 containerd[1633]: 2025-10-30 00:04:26.278 [INFO][4931] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4f6a27d343c2a681f43227ca9c16a2bd6bf1797766d78fa9b15ce82cad0b083a" host="localhost" Oct 30 00:04:26.473542 containerd[1633]: 2025-10-30 00:04:26.280 [INFO][4931] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4f6a27d343c2a681f43227ca9c16a2bd6bf1797766d78fa9b15ce82cad0b083a Oct 30 00:04:26.473542 containerd[1633]: 2025-10-30 00:04:26.291 [INFO][4931] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4f6a27d343c2a681f43227ca9c16a2bd6bf1797766d78fa9b15ce82cad0b083a" host="localhost" Oct 30 00:04:26.473542 containerd[1633]: 2025-10-30 00:04:26.334 [INFO][4931] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.4f6a27d343c2a681f43227ca9c16a2bd6bf1797766d78fa9b15ce82cad0b083a" host="localhost" Oct 30 00:04:26.473542 containerd[1633]: 2025-10-30 00:04:26.334 [INFO][4931] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.4f6a27d343c2a681f43227ca9c16a2bd6bf1797766d78fa9b15ce82cad0b083a" host="localhost" Oct 30 00:04:26.473542 containerd[1633]: 2025-10-30 00:04:26.334 [INFO][4931] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 30 00:04:26.473542 containerd[1633]: 2025-10-30 00:04:26.334 [INFO][4931] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="4f6a27d343c2a681f43227ca9c16a2bd6bf1797766d78fa9b15ce82cad0b083a" HandleID="k8s-pod-network.4f6a27d343c2a681f43227ca9c16a2bd6bf1797766d78fa9b15ce82cad0b083a" Workload="localhost-k8s-coredns--66bc5c9577--666sp-eth0" Oct 30 00:04:26.474221 containerd[1633]: 2025-10-30 00:04:26.339 [INFO][4916] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4f6a27d343c2a681f43227ca9c16a2bd6bf1797766d78fa9b15ce82cad0b083a" Namespace="kube-system" Pod="coredns-66bc5c9577-666sp" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--666sp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--666sp-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"54eb03ba-2fa6-4d86-891f-3330d4c9a1a2", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 3, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-666sp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia5e8f64bab6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:04:26.474221 containerd[1633]: 2025-10-30 00:04:26.339 [INFO][4916] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="4f6a27d343c2a681f43227ca9c16a2bd6bf1797766d78fa9b15ce82cad0b083a" Namespace="kube-system" Pod="coredns-66bc5c9577-666sp" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--666sp-eth0" Oct 30 00:04:26.474221 containerd[1633]: 2025-10-30 00:04:26.340 [INFO][4916] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia5e8f64bab6 ContainerID="4f6a27d343c2a681f43227ca9c16a2bd6bf1797766d78fa9b15ce82cad0b083a" Namespace="kube-system" Pod="coredns-66bc5c9577-666sp" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--666sp-eth0" Oct 30 
00:04:26.474221 containerd[1633]: 2025-10-30 00:04:26.353 [INFO][4916] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4f6a27d343c2a681f43227ca9c16a2bd6bf1797766d78fa9b15ce82cad0b083a" Namespace="kube-system" Pod="coredns-66bc5c9577-666sp" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--666sp-eth0" Oct 30 00:04:26.474221 containerd[1633]: 2025-10-30 00:04:26.354 [INFO][4916] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4f6a27d343c2a681f43227ca9c16a2bd6bf1797766d78fa9b15ce82cad0b083a" Namespace="kube-system" Pod="coredns-66bc5c9577-666sp" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--666sp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--666sp-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"54eb03ba-2fa6-4d86-891f-3330d4c9a1a2", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 3, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4f6a27d343c2a681f43227ca9c16a2bd6bf1797766d78fa9b15ce82cad0b083a", Pod:"coredns-66bc5c9577-666sp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia5e8f64bab6", 
MAC:"a6:29:4f:7c:e0:ad", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:04:26.474221 containerd[1633]: 2025-10-30 00:04:26.470 [INFO][4916] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4f6a27d343c2a681f43227ca9c16a2bd6bf1797766d78fa9b15ce82cad0b083a" Namespace="kube-system" Pod="coredns-66bc5c9577-666sp" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--666sp-eth0" Oct 30 00:04:26.576876 containerd[1633]: time="2025-10-30T00:04:26.576738276Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:04:26.730820 containerd[1633]: time="2025-10-30T00:04:26.730740587Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 00:04:26.731014 containerd[1633]: time="2025-10-30T00:04:26.730811372Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 00:04:26.731092 kubelet[2857]: E1030 
00:04:26.731044 2857 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:04:26.731223 kubelet[2857]: E1030 00:04:26.731109 2857 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:04:26.731662 kubelet[2857]: E1030 00:04:26.731407 2857 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-54f47b4cdd-94wcm_calico-apiserver(f4d9bc01-6958-4087-977b-6989585a84eb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 00:04:26.731662 kubelet[2857]: E1030 00:04:26.731514 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54f47b4cdd-94wcm" podUID="f4d9bc01-6958-4087-977b-6989585a84eb" Oct 30 00:04:26.731795 containerd[1633]: time="2025-10-30T00:04:26.731506478Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 30 
00:04:26.839323 kubelet[2857]: E1030 00:04:26.838740 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54f47b4cdd-94wcm" podUID="f4d9bc01-6958-4087-977b-6989585a84eb" Oct 30 00:04:26.876822 systemd-networkd[1516]: cali24511f4f9cc: Gained IPv6LL Oct 30 00:04:27.021326 containerd[1633]: time="2025-10-30T00:04:27.021275741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54f47b4cdd-ltgss,Uid:676e7fcb-c57d-4b5d-87fb-71a75d798467,Namespace:calico-apiserver,Attempt:0,}" Oct 30 00:04:27.195271 systemd[1]: Started sshd@11-10.0.0.82:22-10.0.0.1:47580.service - OpenSSH per-connection server daemon (10.0.0.1:47580). Oct 30 00:04:27.260136 sshd[4995]: Accepted publickey for core from 10.0.0.1 port 47580 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 30 00:04:27.262118 sshd-session[4995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:04:27.267131 systemd-logind[1613]: New session 12 of user core. Oct 30 00:04:27.271776 systemd[1]: Started session-12.scope - Session 12 of User core. 
Oct 30 00:04:27.279170 containerd[1633]: time="2025-10-30T00:04:27.279121166Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:04:27.334809 containerd[1633]: time="2025-10-30T00:04:27.334749361Z" level=info msg="connecting to shim 4f6a27d343c2a681f43227ca9c16a2bd6bf1797766d78fa9b15ce82cad0b083a" address="unix:///run/containerd/s/cd78b8af30df12c357e7627bc5787d55292ced556c81603e3b7b2832c6e84b57" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:04:27.370865 containerd[1633]: time="2025-10-30T00:04:27.370768000Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 30 00:04:27.372150 containerd[1633]: time="2025-10-30T00:04:27.372092627Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 30 00:04:27.372856 kubelet[2857]: E1030 00:04:27.372771 2857 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 00:04:27.373223 kubelet[2857]: E1030 00:04:27.372873 2857 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 00:04:27.373223 kubelet[2857]: E1030 00:04:27.372992 2857 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod 
csi-node-driver-bmzt2_calico-system(dac688c3-f50b-4d08-95db-f1aa2487f334): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 30 00:04:27.377290 containerd[1633]: time="2025-10-30T00:04:27.377244755Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 30 00:04:27.379851 systemd[1]: Started cri-containerd-4f6a27d343c2a681f43227ca9c16a2bd6bf1797766d78fa9b15ce82cad0b083a.scope - libcontainer container 4f6a27d343c2a681f43227ca9c16a2bd6bf1797766d78fa9b15ce82cad0b083a. Oct 30 00:04:27.404735 systemd-resolved[1304]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 00:04:27.581332 sshd[4999]: Connection closed by 10.0.0.1 port 47580 Oct 30 00:04:27.581840 sshd-session[4995]: pam_unix(sshd:session): session closed for user core Oct 30 00:04:27.588364 systemd[1]: sshd@11-10.0.0.82:22-10.0.0.1:47580.service: Deactivated successfully. Oct 30 00:04:27.590789 systemd[1]: session-12.scope: Deactivated successfully. Oct 30 00:04:27.591748 systemd-logind[1613]: Session 12 logged out. Waiting for processes to exit. Oct 30 00:04:27.593263 systemd-logind[1613]: Removed session 12. 
Oct 30 00:04:27.661961 containerd[1633]: time="2025-10-30T00:04:27.661892336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-666sp,Uid:54eb03ba-2fa6-4d86-891f-3330d4c9a1a2,Namespace:kube-system,Attempt:0,} returns sandbox id \"4f6a27d343c2a681f43227ca9c16a2bd6bf1797766d78fa9b15ce82cad0b083a\"" Oct 30 00:04:27.663075 kubelet[2857]: E1030 00:04:27.663021 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:04:27.673956 systemd-networkd[1516]: cali3a564a54d87: Link UP Oct 30 00:04:27.674241 systemd-networkd[1516]: cali3a564a54d87: Gained carrier Oct 30 00:04:27.702649 containerd[1633]: time="2025-10-30T00:04:27.702394158Z" level=info msg="CreateContainer within sandbox \"4f6a27d343c2a681f43227ca9c16a2bd6bf1797766d78fa9b15ce82cad0b083a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 30 00:04:27.707859 systemd-networkd[1516]: calia5e8f64bab6: Gained IPv6LL Oct 30 00:04:27.734513 containerd[1633]: 2025-10-30 00:04:27.340 [INFO][5001] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--54f47b4cdd--ltgss-eth0 calico-apiserver-54f47b4cdd- calico-apiserver 676e7fcb-c57d-4b5d-87fb-71a75d798467 859 0 2025-10-30 00:03:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:54f47b4cdd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-54f47b4cdd-ltgss eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3a564a54d87 [] [] }} ContainerID="48c939db6c206883db88735d9a981b59c798bb7aad1d7fdd1ca4bfaa7dbac837" Namespace="calico-apiserver" Pod="calico-apiserver-54f47b4cdd-ltgss" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--54f47b4cdd--ltgss-" Oct 30 00:04:27.734513 containerd[1633]: 2025-10-30 00:04:27.340 [INFO][5001] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="48c939db6c206883db88735d9a981b59c798bb7aad1d7fdd1ca4bfaa7dbac837" Namespace="calico-apiserver" Pod="calico-apiserver-54f47b4cdd-ltgss" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f47b4cdd--ltgss-eth0" Oct 30 00:04:27.734513 containerd[1633]: 2025-10-30 00:04:27.394 [INFO][5044] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="48c939db6c206883db88735d9a981b59c798bb7aad1d7fdd1ca4bfaa7dbac837" HandleID="k8s-pod-network.48c939db6c206883db88735d9a981b59c798bb7aad1d7fdd1ca4bfaa7dbac837" Workload="localhost-k8s-calico--apiserver--54f47b4cdd--ltgss-eth0" Oct 30 00:04:27.734513 containerd[1633]: 2025-10-30 00:04:27.394 [INFO][5044] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="48c939db6c206883db88735d9a981b59c798bb7aad1d7fdd1ca4bfaa7dbac837" HandleID="k8s-pod-network.48c939db6c206883db88735d9a981b59c798bb7aad1d7fdd1ca4bfaa7dbac837" Workload="localhost-k8s-calico--apiserver--54f47b4cdd--ltgss-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7040), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-54f47b4cdd-ltgss", "timestamp":"2025-10-30 00:04:27.394138561 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Oct 30 00:04:27.734513 containerd[1633]: 2025-10-30 00:04:27.394 [INFO][5044] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Oct 30 00:04:27.734513 containerd[1633]: 2025-10-30 00:04:27.395 [INFO][5044] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Oct 30 00:04:27.734513 containerd[1633]: 2025-10-30 00:04:27.395 [INFO][5044] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Oct 30 00:04:27.734513 containerd[1633]: 2025-10-30 00:04:27.445 [INFO][5044] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.48c939db6c206883db88735d9a981b59c798bb7aad1d7fdd1ca4bfaa7dbac837" host="localhost" Oct 30 00:04:27.734513 containerd[1633]: 2025-10-30 00:04:27.485 [INFO][5044] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Oct 30 00:04:27.734513 containerd[1633]: 2025-10-30 00:04:27.574 [INFO][5044] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Oct 30 00:04:27.734513 containerd[1633]: 2025-10-30 00:04:27.577 [INFO][5044] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Oct 30 00:04:27.734513 containerd[1633]: 2025-10-30 00:04:27.579 [INFO][5044] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Oct 30 00:04:27.734513 containerd[1633]: 2025-10-30 00:04:27.579 [INFO][5044] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.48c939db6c206883db88735d9a981b59c798bb7aad1d7fdd1ca4bfaa7dbac837" host="localhost" Oct 30 00:04:27.734513 containerd[1633]: 2025-10-30 00:04:27.581 [INFO][5044] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.48c939db6c206883db88735d9a981b59c798bb7aad1d7fdd1ca4bfaa7dbac837 Oct 30 00:04:27.734513 containerd[1633]: 2025-10-30 00:04:27.617 [INFO][5044] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.48c939db6c206883db88735d9a981b59c798bb7aad1d7fdd1ca4bfaa7dbac837" host="localhost" Oct 30 00:04:27.734513 containerd[1633]: 2025-10-30 00:04:27.664 [INFO][5044] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.48c939db6c206883db88735d9a981b59c798bb7aad1d7fdd1ca4bfaa7dbac837" host="localhost" Oct 30 00:04:27.734513 containerd[1633]: 2025-10-30 00:04:27.664 [INFO][5044] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.48c939db6c206883db88735d9a981b59c798bb7aad1d7fdd1ca4bfaa7dbac837" host="localhost" Oct 30 00:04:27.734513 containerd[1633]: 2025-10-30 00:04:27.664 [INFO][5044] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Oct 30 00:04:27.734513 containerd[1633]: 2025-10-30 00:04:27.664 [INFO][5044] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="48c939db6c206883db88735d9a981b59c798bb7aad1d7fdd1ca4bfaa7dbac837" HandleID="k8s-pod-network.48c939db6c206883db88735d9a981b59c798bb7aad1d7fdd1ca4bfaa7dbac837" Workload="localhost-k8s-calico--apiserver--54f47b4cdd--ltgss-eth0" Oct 30 00:04:27.735380 containerd[1633]: 2025-10-30 00:04:27.669 [INFO][5001] cni-plugin/k8s.go 418: Populated endpoint ContainerID="48c939db6c206883db88735d9a981b59c798bb7aad1d7fdd1ca4bfaa7dbac837" Namespace="calico-apiserver" Pod="calico-apiserver-54f47b4cdd-ltgss" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f47b4cdd--ltgss-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54f47b4cdd--ltgss-eth0", GenerateName:"calico-apiserver-54f47b4cdd-", Namespace:"calico-apiserver", SelfLink:"", UID:"676e7fcb-c57d-4b5d-87fb-71a75d798467", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 3, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54f47b4cdd", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-54f47b4cdd-ltgss", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3a564a54d87", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:04:27.735380 containerd[1633]: 2025-10-30 00:04:27.669 [INFO][5001] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="48c939db6c206883db88735d9a981b59c798bb7aad1d7fdd1ca4bfaa7dbac837" Namespace="calico-apiserver" Pod="calico-apiserver-54f47b4cdd-ltgss" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f47b4cdd--ltgss-eth0" Oct 30 00:04:27.735380 containerd[1633]: 2025-10-30 00:04:27.669 [INFO][5001] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3a564a54d87 ContainerID="48c939db6c206883db88735d9a981b59c798bb7aad1d7fdd1ca4bfaa7dbac837" Namespace="calico-apiserver" Pod="calico-apiserver-54f47b4cdd-ltgss" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f47b4cdd--ltgss-eth0" Oct 30 00:04:27.735380 containerd[1633]: 2025-10-30 00:04:27.674 [INFO][5001] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="48c939db6c206883db88735d9a981b59c798bb7aad1d7fdd1ca4bfaa7dbac837" Namespace="calico-apiserver" Pod="calico-apiserver-54f47b4cdd-ltgss" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f47b4cdd--ltgss-eth0" Oct 30 00:04:27.735380 containerd[1633]: 2025-10-30 00:04:27.675 [INFO][5001] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="48c939db6c206883db88735d9a981b59c798bb7aad1d7fdd1ca4bfaa7dbac837" Namespace="calico-apiserver" Pod="calico-apiserver-54f47b4cdd-ltgss" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f47b4cdd--ltgss-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--54f47b4cdd--ltgss-eth0", GenerateName:"calico-apiserver-54f47b4cdd-", Namespace:"calico-apiserver", SelfLink:"", UID:"676e7fcb-c57d-4b5d-87fb-71a75d798467", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2025, time.October, 30, 0, 3, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"54f47b4cdd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"48c939db6c206883db88735d9a981b59c798bb7aad1d7fdd1ca4bfaa7dbac837", Pod:"calico-apiserver-54f47b4cdd-ltgss", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3a564a54d87", MAC:"22:38:b3:69:7c:f1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Oct 30 00:04:27.735380 containerd[1633]: 2025-10-30 00:04:27.728 [INFO][5001] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="48c939db6c206883db88735d9a981b59c798bb7aad1d7fdd1ca4bfaa7dbac837" Namespace="calico-apiserver" Pod="calico-apiserver-54f47b4cdd-ltgss" WorkloadEndpoint="localhost-k8s-calico--apiserver--54f47b4cdd--ltgss-eth0" Oct 30 00:04:27.796875 containerd[1633]: time="2025-10-30T00:04:27.796820544Z" level=info msg="Container 464a7b46d22da1e8a51891e766cdb362e2238cf3f8e9b82941387b6006b96e15: CDI devices from CRI Config.CDIDevices: []" Oct 30 00:04:27.842097 kubelet[2857]: E1030 00:04:27.841928 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54f47b4cdd-94wcm" podUID="f4d9bc01-6958-4087-977b-6989585a84eb" Oct 30 00:04:27.925969 containerd[1633]: time="2025-10-30T00:04:27.925904384Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:04:28.069710 containerd[1633]: time="2025-10-30T00:04:28.069625543Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 30 00:04:28.070192 containerd[1633]: time="2025-10-30T00:04:28.069735232Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 30 00:04:28.070226 kubelet[2857]: E1030 00:04:28.069923 2857 log.go:32] "PullImage from image service failed" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 00:04:28.070226 kubelet[2857]: E1030 00:04:28.069976 2857 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 00:04:28.070226 kubelet[2857]: E1030 00:04:28.070061 2857 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-bmzt2_calico-system(dac688c3-f50b-4d08-95db-f1aa2487f334): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 30 00:04:28.070360 kubelet[2857]: E1030 00:04:28.070104 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to 
resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bmzt2" podUID="dac688c3-f50b-4d08-95db-f1aa2487f334" Oct 30 00:04:28.212528 containerd[1633]: time="2025-10-30T00:04:28.212380183Z" level=info msg="CreateContainer within sandbox \"4f6a27d343c2a681f43227ca9c16a2bd6bf1797766d78fa9b15ce82cad0b083a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"464a7b46d22da1e8a51891e766cdb362e2238cf3f8e9b82941387b6006b96e15\"" Oct 30 00:04:28.213320 containerd[1633]: time="2025-10-30T00:04:28.213215065Z" level=info msg="StartContainer for \"464a7b46d22da1e8a51891e766cdb362e2238cf3f8e9b82941387b6006b96e15\"" Oct 30 00:04:28.214376 containerd[1633]: time="2025-10-30T00:04:28.214348687Z" level=info msg="connecting to shim 464a7b46d22da1e8a51891e766cdb362e2238cf3f8e9b82941387b6006b96e15" address="unix:///run/containerd/s/cd78b8af30df12c357e7627bc5787d55292ced556c81603e3b7b2832c6e84b57" protocol=ttrpc version=3 Oct 30 00:04:28.242788 systemd[1]: Started cri-containerd-464a7b46d22da1e8a51891e766cdb362e2238cf3f8e9b82941387b6006b96e15.scope - libcontainer container 464a7b46d22da1e8a51891e766cdb362e2238cf3f8e9b82941387b6006b96e15. 
Oct 30 00:04:28.435055 containerd[1633]: time="2025-10-30T00:04:28.434934390Z" level=info msg="StartContainer for \"464a7b46d22da1e8a51891e766cdb362e2238cf3f8e9b82941387b6006b96e15\" returns successfully" Oct 30 00:04:28.654548 containerd[1633]: time="2025-10-30T00:04:28.654486702Z" level=info msg="connecting to shim 48c939db6c206883db88735d9a981b59c798bb7aad1d7fdd1ca4bfaa7dbac837" address="unix:///run/containerd/s/805433677e44a868efc89ad6e7c49f599b68abac4d87cc2a43b1242532e2e185" namespace=k8s.io protocol=ttrpc version=3 Oct 30 00:04:28.688812 systemd[1]: Started cri-containerd-48c939db6c206883db88735d9a981b59c798bb7aad1d7fdd1ca4bfaa7dbac837.scope - libcontainer container 48c939db6c206883db88735d9a981b59c798bb7aad1d7fdd1ca4bfaa7dbac837. Oct 30 00:04:28.705938 systemd-resolved[1304]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 30 00:04:28.774791 containerd[1633]: time="2025-10-30T00:04:28.774696490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-54f47b4cdd-ltgss,Uid:676e7fcb-c57d-4b5d-87fb-71a75d798467,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"48c939db6c206883db88735d9a981b59c798bb7aad1d7fdd1ca4bfaa7dbac837\"" Oct 30 00:04:28.776784 containerd[1633]: time="2025-10-30T00:04:28.776712735Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 00:04:28.820922 kubelet[2857]: E1030 00:04:28.820825 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:04:28.845973 kubelet[2857]: E1030 00:04:28.845908 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:04:28.847615 kubelet[2857]: E1030 00:04:28.847549 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to 
\"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bmzt2" podUID="dac688c3-f50b-4d08-95db-f1aa2487f334" Oct 30 00:04:29.114867 containerd[1633]: time="2025-10-30T00:04:29.114766189Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:04:29.223160 containerd[1633]: time="2025-10-30T00:04:29.223072433Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 00:04:29.223358 containerd[1633]: time="2025-10-30T00:04:29.223150552Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 00:04:29.223456 kubelet[2857]: E1030 00:04:29.223403 2857 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:04:29.223509 kubelet[2857]: E1030 00:04:29.223462 2857 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:04:29.223636 kubelet[2857]: E1030 00:04:29.223584 2857 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-54f47b4cdd-ltgss_calico-apiserver(676e7fcb-c57d-4b5d-87fb-71a75d798467): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 00:04:29.223681 kubelet[2857]: E1030 00:04:29.223658 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54f47b4cdd-ltgss" podUID="676e7fcb-c57d-4b5d-87fb-71a75d798467" Oct 30 00:04:29.435819 systemd-networkd[1516]: cali3a564a54d87: Gained IPv6LL Oct 30 00:04:29.736536 kubelet[2857]: E1030 00:04:29.736372 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:04:29.822295 kubelet[2857]: I1030 00:04:29.822213 2857 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-666sp" podStartSLOduration=65.822189453 podStartE2EDuration="1m5.822189453s" podCreationTimestamp="2025-10-30 00:03:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-30 00:04:29.822153685 +0000 UTC m=+70.178663452" watchObservedRunningTime="2025-10-30 00:04:29.822189453 +0000 UTC m=+70.178699210" Oct 30 00:04:29.849755 kubelet[2857]: E1030 00:04:29.849685 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:04:29.850403 kubelet[2857]: E1030 00:04:29.850361 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54f47b4cdd-ltgss" podUID="676e7fcb-c57d-4b5d-87fb-71a75d798467" Oct 30 00:04:30.736822 containerd[1633]: time="2025-10-30T00:04:30.736769259Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Oct 30 00:04:30.851949 kubelet[2857]: E1030 00:04:30.851904 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:04:31.196426 containerd[1633]: time="2025-10-30T00:04:31.196350660Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:04:31.359024 containerd[1633]: time="2025-10-30T00:04:31.358917873Z" 
level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Oct 30 00:04:31.359202 containerd[1633]: time="2025-10-30T00:04:31.359052159Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Oct 30 00:04:31.359473 kubelet[2857]: E1030 00:04:31.359313 2857 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 00:04:31.359473 kubelet[2857]: E1030 00:04:31.359382 2857 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Oct 30 00:04:31.359473 kubelet[2857]: E1030 00:04:31.359470 2857 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-778756694d-5pt4t_calico-system(c9ccbdc0-7a3b-420c-9200-91bd3b896e9d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Oct 30 
00:04:31.359785 kubelet[2857]: E1030 00:04:31.359535 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-778756694d-5pt4t" podUID="c9ccbdc0-7a3b-420c-9200-91bd3b896e9d" Oct 30 00:04:32.598748 systemd[1]: Started sshd@12-10.0.0.82:22-10.0.0.1:58636.service - OpenSSH per-connection server daemon (10.0.0.1:58636). Oct 30 00:04:32.667986 sshd[5192]: Accepted publickey for core from 10.0.0.1 port 58636 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 30 00:04:32.669906 sshd-session[5192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:04:32.676709 systemd-logind[1613]: New session 13 of user core. Oct 30 00:04:32.681847 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 30 00:04:32.739132 containerd[1633]: time="2025-10-30T00:04:32.738783836Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Oct 30 00:04:32.825825 sshd[5195]: Connection closed by 10.0.0.1 port 58636 Oct 30 00:04:32.828338 sshd-session[5192]: pam_unix(sshd:session): session closed for user core Oct 30 00:04:32.833803 systemd[1]: sshd@12-10.0.0.82:22-10.0.0.1:58636.service: Deactivated successfully. Oct 30 00:04:32.836202 systemd[1]: session-13.scope: Deactivated successfully. Oct 30 00:04:32.837308 systemd-logind[1613]: Session 13 logged out. Waiting for processes to exit. Oct 30 00:04:32.838581 systemd-logind[1613]: Removed session 13. 
Oct 30 00:04:33.135071 containerd[1633]: time="2025-10-30T00:04:33.134738682Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:04:33.140175 containerd[1633]: time="2025-10-30T00:04:33.137915452Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Oct 30 00:04:33.140175 containerd[1633]: time="2025-10-30T00:04:33.138035470Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Oct 30 00:04:33.140378 kubelet[2857]: E1030 00:04:33.138266 2857 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 00:04:33.140378 kubelet[2857]: E1030 00:04:33.138330 2857 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Oct 30 00:04:33.140378 kubelet[2857]: E1030 00:04:33.138429 2857 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-7dwtp_calico-system(76c98e53-eb5b-4690-b648-f39ba68c3761): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Oct 30 00:04:33.140378 kubelet[2857]: E1030 00:04:33.138472 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-7dwtp" podUID="76c98e53-eb5b-4690-b648-f39ba68c3761" Oct 30 00:04:33.739552 containerd[1633]: time="2025-10-30T00:04:33.738729084Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Oct 30 00:04:34.088484 containerd[1633]: time="2025-10-30T00:04:34.088420314Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:04:34.090085 containerd[1633]: time="2025-10-30T00:04:34.090046800Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Oct 30 00:04:34.090156 containerd[1633]: time="2025-10-30T00:04:34.090076477Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Oct 30 00:04:34.090319 kubelet[2857]: E1030 00:04:34.090274 2857 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 00:04:34.090319 kubelet[2857]: E1030 00:04:34.090323 2857 kuberuntime_image.go:43] "Failed to pull image" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Oct 30 00:04:34.090491 kubelet[2857]: E1030 00:04:34.090421 2857 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-85cf958ddd-dqv9z_calico-system(c1be5870-aa7d-44d6-8228-72dd5ed8c5f5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Oct 30 00:04:34.092128 containerd[1633]: time="2025-10-30T00:04:34.092100370Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Oct 30 00:04:34.456381 containerd[1633]: time="2025-10-30T00:04:34.456055151Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:04:34.458400 containerd[1633]: time="2025-10-30T00:04:34.458129681Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Oct 30 00:04:34.458400 containerd[1633]: time="2025-10-30T00:04:34.458210073Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Oct 30 00:04:34.458498 kubelet[2857]: E1030 00:04:34.458441 2857 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 00:04:34.459008 kubelet[2857]: E1030 00:04:34.458510 2857 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Oct 30 00:04:34.459008 kubelet[2857]: E1030 00:04:34.458656 2857 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-85cf958ddd-dqv9z_calico-system(c1be5870-aa7d-44d6-8228-72dd5ed8c5f5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Oct 30 00:04:34.459008 kubelet[2857]: E1030 00:04:34.458715 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85cf958ddd-dqv9z" podUID="c1be5870-aa7d-44d6-8228-72dd5ed8c5f5" Oct 
30 00:04:37.848902 systemd[1]: Started sshd@13-10.0.0.82:22-10.0.0.1:58646.service - OpenSSH per-connection server daemon (10.0.0.1:58646). Oct 30 00:04:37.909882 sshd[5219]: Accepted publickey for core from 10.0.0.1 port 58646 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 30 00:04:37.912037 sshd-session[5219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:04:37.917645 systemd-logind[1613]: New session 14 of user core. Oct 30 00:04:37.929917 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 30 00:04:38.077126 sshd[5222]: Connection closed by 10.0.0.1 port 58646 Oct 30 00:04:38.077539 sshd-session[5219]: pam_unix(sshd:session): session closed for user core Oct 30 00:04:38.083591 systemd[1]: sshd@13-10.0.0.82:22-10.0.0.1:58646.service: Deactivated successfully. Oct 30 00:04:38.086364 systemd[1]: session-14.scope: Deactivated successfully. Oct 30 00:04:38.087551 systemd-logind[1613]: Session 14 logged out. Waiting for processes to exit. Oct 30 00:04:38.089448 systemd-logind[1613]: Removed session 14. 
Oct 30 00:04:39.738275 containerd[1633]: time="2025-10-30T00:04:39.738021930Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Oct 30 00:04:40.102634 containerd[1633]: time="2025-10-30T00:04:40.102540886Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:04:40.205872 containerd[1633]: time="2025-10-30T00:04:40.205784011Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Oct 30 00:04:40.206047 containerd[1633]: time="2025-10-30T00:04:40.205831672Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Oct 30 00:04:40.206304 kubelet[2857]: E1030 00:04:40.206208 2857 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 00:04:40.206304 kubelet[2857]: E1030 00:04:40.206293 2857 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Oct 30 00:04:40.207041 kubelet[2857]: E1030 00:04:40.206426 2857 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-bmzt2_calico-system(dac688c3-f50b-4d08-95db-f1aa2487f334): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Oct 30 00:04:40.207829 containerd[1633]: time="2025-10-30T00:04:40.207771590Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Oct 30 00:04:40.564800 containerd[1633]: time="2025-10-30T00:04:40.564720406Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:04:40.589040 containerd[1633]: time="2025-10-30T00:04:40.588896825Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Oct 30 00:04:40.589271 containerd[1633]: time="2025-10-30T00:04:40.588935308Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Oct 30 00:04:40.590043 kubelet[2857]: E1030 00:04:40.589338 2857 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Oct 30 00:04:40.590043 kubelet[2857]: E1030 00:04:40.590006 2857 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" 
Oct 30 00:04:40.590140 kubelet[2857]: E1030 00:04:40.590129 2857 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-bmzt2_calico-system(dac688c3-f50b-4d08-95db-f1aa2487f334): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Oct 30 00:04:40.591420 kubelet[2857]: E1030 00:04:40.590522 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bmzt2" podUID="dac688c3-f50b-4d08-95db-f1aa2487f334" Oct 30 00:04:40.738328 containerd[1633]: time="2025-10-30T00:04:40.737989850Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 00:04:41.170122 containerd[1633]: time="2025-10-30T00:04:41.170030427Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:04:41.177769 containerd[1633]: time="2025-10-30T00:04:41.177694484Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 00:04:41.177769 containerd[1633]: time="2025-10-30T00:04:41.177735091Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 00:04:41.177998 kubelet[2857]: E1030 00:04:41.177940 2857 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:04:41.177998 kubelet[2857]: E1030 00:04:41.177990 2857 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:04:41.178108 kubelet[2857]: E1030 00:04:41.178080 2857 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-54f47b4cdd-94wcm_calico-apiserver(f4d9bc01-6958-4087-977b-6989585a84eb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 00:04:41.178144 kubelet[2857]: E1030 00:04:41.178116 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54f47b4cdd-94wcm" podUID="f4d9bc01-6958-4087-977b-6989585a84eb" Oct 30 00:04:42.736752 kubelet[2857]: E1030 00:04:42.736663 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 30 00:04:42.737587 containerd[1633]: time="2025-10-30T00:04:42.737446211Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Oct 30 00:04:43.095077 systemd[1]: Started sshd@14-10.0.0.82:22-10.0.0.1:35742.service - OpenSSH per-connection server daemon (10.0.0.1:35742). Oct 30 00:04:43.103657 containerd[1633]: time="2025-10-30T00:04:43.103595218Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Oct 30 00:04:43.130136 containerd[1633]: time="2025-10-30T00:04:43.129898100Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Oct 30 00:04:43.130136 containerd[1633]: time="2025-10-30T00:04:43.129976407Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Oct 30 00:04:43.130473 kubelet[2857]: E1030 00:04:43.130427 2857 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:04:43.130676 
kubelet[2857]: E1030 00:04:43.130487 2857 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Oct 30 00:04:43.130724 kubelet[2857]: E1030 00:04:43.130682 2857 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-54f47b4cdd-ltgss_calico-apiserver(676e7fcb-c57d-4b5d-87fb-71a75d798467): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Oct 30 00:04:43.131243 kubelet[2857]: E1030 00:04:43.131213 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54f47b4cdd-ltgss" podUID="676e7fcb-c57d-4b5d-87fb-71a75d798467" Oct 30 00:04:43.162422 sshd[5236]: Accepted publickey for core from 10.0.0.1 port 35742 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 30 00:04:43.165187 sshd-session[5236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:04:43.170914 systemd-logind[1613]: New session 15 of user core. Oct 30 00:04:43.180896 systemd[1]: Started session-15.scope - Session 15 of User core. 
Oct 30 00:04:43.299763 sshd[5239]: Connection closed by 10.0.0.1 port 35742 Oct 30 00:04:43.300138 sshd-session[5236]: pam_unix(sshd:session): session closed for user core Oct 30 00:04:43.310656 systemd[1]: sshd@14-10.0.0.82:22-10.0.0.1:35742.service: Deactivated successfully. Oct 30 00:04:43.312898 systemd[1]: session-15.scope: Deactivated successfully. Oct 30 00:04:43.313876 systemd-logind[1613]: Session 15 logged out. Waiting for processes to exit. Oct 30 00:04:43.316848 systemd[1]: Started sshd@15-10.0.0.82:22-10.0.0.1:35746.service - OpenSSH per-connection server daemon (10.0.0.1:35746). Oct 30 00:04:43.317889 systemd-logind[1613]: Removed session 15. Oct 30 00:04:43.375853 sshd[5254]: Accepted publickey for core from 10.0.0.1 port 35746 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA Oct 30 00:04:43.377697 sshd-session[5254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 30 00:04:43.389784 systemd-logind[1613]: New session 16 of user core. Oct 30 00:04:43.397929 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 30 00:04:43.581557 sshd[5257]: Connection closed by 10.0.0.1 port 35746 Oct 30 00:04:43.584276 sshd-session[5254]: pam_unix(sshd:session): session closed for user core Oct 30 00:04:43.594460 systemd[1]: sshd@15-10.0.0.82:22-10.0.0.1:35746.service: Deactivated successfully. Oct 30 00:04:43.603062 systemd[1]: session-16.scope: Deactivated successfully. Oct 30 00:04:43.605705 systemd-logind[1613]: Session 16 logged out. Waiting for processes to exit. Oct 30 00:04:43.610460 systemd[1]: Started sshd@16-10.0.0.82:22-10.0.0.1:35756.service - OpenSSH per-connection server daemon (10.0.0.1:35756). Oct 30 00:04:43.612939 systemd-logind[1613]: Removed session 16. 
Oct 30 00:04:43.684526 sshd[5269]: Accepted publickey for core from 10.0.0.1 port 35756 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA
Oct 30 00:04:43.686832 sshd-session[5269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 00:04:43.693872 systemd-logind[1613]: New session 17 of user core.
Oct 30 00:04:43.701948 systemd[1]: Started session-17.scope - Session 17 of User core.
Oct 30 00:04:43.864291 sshd[5272]: Connection closed by 10.0.0.1 port 35756
Oct 30 00:04:43.864788 sshd-session[5269]: pam_unix(sshd:session): session closed for user core
Oct 30 00:04:43.870414 systemd-logind[1613]: Session 17 logged out. Waiting for processes to exit.
Oct 30 00:04:43.872026 systemd[1]: sshd@16-10.0.0.82:22-10.0.0.1:35756.service: Deactivated successfully.
Oct 30 00:04:43.875085 systemd[1]: session-17.scope: Deactivated successfully.
Oct 30 00:04:43.878001 systemd-logind[1613]: Removed session 17.
Oct 30 00:04:44.771815 containerd[1633]: time="2025-10-30T00:04:44.771747762Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6956619011f4b032055c93357f3467b8616d3ebb0ddb660b9c2ed1e1fce8ce26\" id:\"6e0a07562e734d1535225add68b7be8dbe1275ee51f6c4b3e24a93f701f80db4\" pid:5296 exited_at:{seconds:1761782684 nanos:771338334}"
Oct 30 00:04:44.774714 kubelet[2857]: E1030 00:04:44.774658 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 00:04:46.737921 kubelet[2857]: E1030 00:04:46.737861 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-778756694d-5pt4t" podUID="c9ccbdc0-7a3b-420c-9200-91bd3b896e9d"
Oct 30 00:04:47.737218 kubelet[2857]: E1030 00:04:47.737169 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 00:04:47.739228 kubelet[2857]: E1030 00:04:47.738808 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85cf958ddd-dqv9z" podUID="c1be5870-aa7d-44d6-8228-72dd5ed8c5f5"
Oct 30 00:04:47.739228 kubelet[2857]: E1030 00:04:47.738776 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-7dwtp" podUID="76c98e53-eb5b-4690-b648-f39ba68c3761"
Oct 30 00:04:48.882131 systemd[1]: Started sshd@17-10.0.0.82:22-10.0.0.1:35760.service - OpenSSH per-connection server daemon (10.0.0.1:35760).
Oct 30 00:04:48.958764 sshd[5314]: Accepted publickey for core from 10.0.0.1 port 35760 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA
Oct 30 00:04:48.961031 sshd-session[5314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 00:04:48.966985 systemd-logind[1613]: New session 18 of user core.
Oct 30 00:04:48.972798 systemd[1]: Started session-18.scope - Session 18 of User core.
Oct 30 00:04:49.141028 sshd[5317]: Connection closed by 10.0.0.1 port 35760
Oct 30 00:04:49.141505 sshd-session[5314]: pam_unix(sshd:session): session closed for user core
Oct 30 00:04:49.148107 systemd[1]: sshd@17-10.0.0.82:22-10.0.0.1:35760.service: Deactivated successfully.
Oct 30 00:04:49.151144 systemd[1]: session-18.scope: Deactivated successfully.
Oct 30 00:04:49.152766 systemd-logind[1613]: Session 18 logged out. Waiting for processes to exit.
Oct 30 00:04:49.154444 systemd-logind[1613]: Removed session 18.
Oct 30 00:04:51.737527 kubelet[2857]: E1030 00:04:51.737451 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bmzt2" podUID="dac688c3-f50b-4d08-95db-f1aa2487f334"
Oct 30 00:04:52.737107 kubelet[2857]: E1030 00:04:52.736980 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54f47b4cdd-94wcm" podUID="f4d9bc01-6958-4087-977b-6989585a84eb"
Oct 30 00:04:54.161666 systemd[1]: Started sshd@18-10.0.0.82:22-10.0.0.1:38146.service - OpenSSH per-connection server daemon (10.0.0.1:38146).
Oct 30 00:04:54.251410 sshd[5342]: Accepted publickey for core from 10.0.0.1 port 38146 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA
Oct 30 00:04:54.253811 sshd-session[5342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 00:04:54.259156 systemd-logind[1613]: New session 19 of user core.
Oct 30 00:04:54.263866 systemd[1]: Started session-19.scope - Session 19 of User core.
Oct 30 00:04:54.406021 sshd[5345]: Connection closed by 10.0.0.1 port 38146
Oct 30 00:04:54.406408 sshd-session[5342]: pam_unix(sshd:session): session closed for user core
Oct 30 00:04:54.411052 systemd[1]: sshd@18-10.0.0.82:22-10.0.0.1:38146.service: Deactivated successfully.
Oct 30 00:04:54.413438 systemd[1]: session-19.scope: Deactivated successfully.
Oct 30 00:04:54.414436 systemd-logind[1613]: Session 19 logged out. Waiting for processes to exit.
Oct 30 00:04:54.415784 systemd-logind[1613]: Removed session 19.
Oct 30 00:04:54.736822 kubelet[2857]: E1030 00:04:54.736640 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 00:04:55.737495 kubelet[2857]: E1030 00:04:55.737249 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54f47b4cdd-ltgss" podUID="676e7fcb-c57d-4b5d-87fb-71a75d798467"
Oct 30 00:04:59.424934 systemd[1]: Started sshd@19-10.0.0.82:22-10.0.0.1:38160.service - OpenSSH per-connection server daemon (10.0.0.1:38160).
Oct 30 00:04:59.498287 sshd[5367]: Accepted publickey for core from 10.0.0.1 port 38160 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA
Oct 30 00:04:59.500485 sshd-session[5367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 00:04:59.507525 systemd-logind[1613]: New session 20 of user core.
Oct 30 00:04:59.518818 systemd[1]: Started session-20.scope - Session 20 of User core.
Oct 30 00:04:59.653352 sshd[5370]: Connection closed by 10.0.0.1 port 38160
Oct 30 00:04:59.653766 sshd-session[5367]: pam_unix(sshd:session): session closed for user core
Oct 30 00:04:59.659454 systemd[1]: sshd@19-10.0.0.82:22-10.0.0.1:38160.service: Deactivated successfully.
Oct 30 00:04:59.662068 systemd[1]: session-20.scope: Deactivated successfully.
Oct 30 00:04:59.663366 systemd-logind[1613]: Session 20 logged out. Waiting for processes to exit.
Oct 30 00:04:59.665011 systemd-logind[1613]: Removed session 20.
Oct 30 00:05:00.737785 containerd[1633]: time="2025-10-30T00:05:00.737721799Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Oct 30 00:05:01.134110 containerd[1633]: time="2025-10-30T00:05:01.134026146Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 30 00:05:01.237860 containerd[1633]: time="2025-10-30T00:05:01.237753158Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Oct 30 00:05:01.237860 containerd[1633]: time="2025-10-30T00:05:01.237827799Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Oct 30 00:05:01.238190 kubelet[2857]: E1030 00:05:01.238084 2857 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Oct 30 00:05:01.238190 kubelet[2857]: E1030 00:05:01.238152 2857 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Oct 30 00:05:01.238753 kubelet[2857]: E1030 00:05:01.238254 2857 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-7dwtp_calico-system(76c98e53-eb5b-4690-b648-f39ba68c3761): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Oct 30 00:05:01.238753 kubelet[2857]: E1030 00:05:01.238296 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-7dwtp" podUID="76c98e53-eb5b-4690-b648-f39ba68c3761"
Oct 30 00:05:01.738204 containerd[1633]: time="2025-10-30T00:05:01.738030005Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Oct 30 00:05:02.086740 containerd[1633]: time="2025-10-30T00:05:02.086644629Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 30 00:05:02.088160 containerd[1633]: time="2025-10-30T00:05:02.088070491Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Oct 30 00:05:02.088231 containerd[1633]: time="2025-10-30T00:05:02.088128030Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Oct 30 00:05:02.088460 kubelet[2857]: E1030 00:05:02.088372 2857 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Oct 30 00:05:02.088460 kubelet[2857]: E1030 00:05:02.088453 2857 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Oct 30 00:05:02.088627 kubelet[2857]: E1030 00:05:02.088552 2857 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-778756694d-5pt4t_calico-system(c9ccbdc0-7a3b-420c-9200-91bd3b896e9d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Oct 30 00:05:02.088627 kubelet[2857]: E1030 00:05:02.088590 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-778756694d-5pt4t" podUID="c9ccbdc0-7a3b-420c-9200-91bd3b896e9d"
Oct 30 00:05:02.737958 containerd[1633]: time="2025-10-30T00:05:02.737911564Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Oct 30 00:05:03.188975 containerd[1633]: time="2025-10-30T00:05:03.188896406Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 30 00:05:03.250121 containerd[1633]: time="2025-10-30T00:05:03.250011486Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Oct 30 00:05:03.250317 containerd[1633]: time="2025-10-30T00:05:03.250128649Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Oct 30 00:05:03.250385 kubelet[2857]: E1030 00:05:03.250324 2857 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Oct 30 00:05:03.250829 kubelet[2857]: E1030 00:05:03.250388 2857 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Oct 30 00:05:03.250829 kubelet[2857]: E1030 00:05:03.250500 2857 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-85cf958ddd-dqv9z_calico-system(c1be5870-aa7d-44d6-8228-72dd5ed8c5f5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Oct 30 00:05:03.251782 containerd[1633]: time="2025-10-30T00:05:03.251666983Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Oct 30 00:05:04.039405 containerd[1633]: time="2025-10-30T00:05:04.039319283Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 30 00:05:04.040768 containerd[1633]: time="2025-10-30T00:05:04.040723883Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Oct 30 00:05:04.040842 containerd[1633]: time="2025-10-30T00:05:04.040809816Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Oct 30 00:05:04.041071 kubelet[2857]: E1030 00:05:04.041001 2857 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Oct 30 00:05:04.041124 kubelet[2857]: E1030 00:05:04.041071 2857 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Oct 30 00:05:04.041320 kubelet[2857]: E1030 00:05:04.041261 2857 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-85cf958ddd-dqv9z_calico-system(c1be5870-aa7d-44d6-8228-72dd5ed8c5f5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Oct 30 00:05:04.041520 kubelet[2857]: E1030 00:05:04.041327 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85cf958ddd-dqv9z" podUID="c1be5870-aa7d-44d6-8228-72dd5ed8c5f5"
Oct 30 00:05:04.041646 containerd[1633]: time="2025-10-30T00:05:04.041493902Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Oct 30 00:05:04.405247 containerd[1633]: time="2025-10-30T00:05:04.405019216Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 30 00:05:04.448717 containerd[1633]: time="2025-10-30T00:05:04.448640539Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Oct 30 00:05:04.448912 containerd[1633]: time="2025-10-30T00:05:04.448662051Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Oct 30 00:05:04.448968 kubelet[2857]: E1030 00:05:04.448915 2857 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 30 00:05:04.448968 kubelet[2857]: E1030 00:05:04.448964 2857 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 30 00:05:04.449434 kubelet[2857]: E1030 00:05:04.449051 2857 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-54f47b4cdd-94wcm_calico-apiserver(f4d9bc01-6958-4087-977b-6989585a84eb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Oct 30 00:05:04.449434 kubelet[2857]: E1030 00:05:04.449085 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54f47b4cdd-94wcm" podUID="f4d9bc01-6958-4087-977b-6989585a84eb"
Oct 30 00:05:04.668048 systemd[1]: Started sshd@20-10.0.0.82:22-10.0.0.1:47570.service - OpenSSH per-connection server daemon (10.0.0.1:47570).
Oct 30 00:05:04.735622 sshd[5383]: Accepted publickey for core from 10.0.0.1 port 47570 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA
Oct 30 00:05:04.738111 containerd[1633]: time="2025-10-30T00:05:04.737931005Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Oct 30 00:05:04.739620 sshd-session[5383]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 00:05:04.746363 systemd-logind[1613]: New session 21 of user core.
Oct 30 00:05:04.751880 systemd[1]: Started session-21.scope - Session 21 of User core.
Oct 30 00:05:04.887742 sshd[5386]: Connection closed by 10.0.0.1 port 47570
Oct 30 00:05:04.888185 sshd-session[5383]: pam_unix(sshd:session): session closed for user core
Oct 30 00:05:04.904047 systemd[1]: sshd@20-10.0.0.82:22-10.0.0.1:47570.service: Deactivated successfully.
Oct 30 00:05:04.906587 systemd[1]: session-21.scope: Deactivated successfully.
Oct 30 00:05:04.907495 systemd-logind[1613]: Session 21 logged out. Waiting for processes to exit.
Oct 30 00:05:04.911381 systemd[1]: Started sshd@21-10.0.0.82:22-10.0.0.1:47586.service - OpenSSH per-connection server daemon (10.0.0.1:47586).
Oct 30 00:05:04.912308 systemd-logind[1613]: Removed session 21.
Oct 30 00:05:04.997745 sshd[5399]: Accepted publickey for core from 10.0.0.1 port 47586 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA
Oct 30 00:05:04.999568 sshd-session[5399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 00:05:05.004309 systemd-logind[1613]: New session 22 of user core.
Oct 30 00:05:05.018818 systemd[1]: Started session-22.scope - Session 22 of User core.
Oct 30 00:05:05.157585 containerd[1633]: time="2025-10-30T00:05:05.157496286Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 30 00:05:05.170566 containerd[1633]: time="2025-10-30T00:05:05.170423278Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Oct 30 00:05:05.170811 containerd[1633]: time="2025-10-30T00:05:05.170454246Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Oct 30 00:05:05.170978 kubelet[2857]: E1030 00:05:05.170840 2857 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Oct 30 00:05:05.170978 kubelet[2857]: E1030 00:05:05.170914 2857 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Oct 30 00:05:05.171134 kubelet[2857]: E1030 00:05:05.171014 2857 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-bmzt2_calico-system(dac688c3-f50b-4d08-95db-f1aa2487f334): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Oct 30 00:05:05.172086 containerd[1633]: time="2025-10-30T00:05:05.171991438Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Oct 30 00:05:05.607564 containerd[1633]: time="2025-10-30T00:05:05.607399213Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 30 00:05:05.611956 containerd[1633]: time="2025-10-30T00:05:05.611883306Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Oct 30 00:05:05.612173 containerd[1633]: time="2025-10-30T00:05:05.611953118Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Oct 30 00:05:05.612767 kubelet[2857]: E1030 00:05:05.612583 2857 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Oct 30 00:05:05.613294 kubelet[2857]: E1030 00:05:05.612778 2857 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Oct 30 00:05:05.613294 kubelet[2857]: E1030 00:05:05.612871 2857 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-bmzt2_calico-system(dac688c3-f50b-4d08-95db-f1aa2487f334): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Oct 30 00:05:05.613294 kubelet[2857]: E1030 00:05:05.612937 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bmzt2" podUID="dac688c3-f50b-4d08-95db-f1aa2487f334"
Oct 30 00:05:05.724705 sshd[5402]: Connection closed by 10.0.0.1 port 47586
Oct 30 00:05:05.725166 sshd-session[5399]: pam_unix(sshd:session): session closed for user core
Oct 30 00:05:05.735993 systemd[1]: sshd@21-10.0.0.82:22-10.0.0.1:47586.service: Deactivated successfully.
Oct 30 00:05:05.739353 systemd[1]: session-22.scope: Deactivated successfully.
Oct 30 00:05:05.740359 systemd-logind[1613]: Session 22 logged out. Waiting for processes to exit.
Oct 30 00:05:05.744082 systemd[1]: Started sshd@22-10.0.0.82:22-10.0.0.1:47590.service - OpenSSH per-connection server daemon (10.0.0.1:47590).
Oct 30 00:05:05.745551 systemd-logind[1613]: Removed session 22.
Oct 30 00:05:05.810312 sshd[5413]: Accepted publickey for core from 10.0.0.1 port 47590 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA
Oct 30 00:05:05.812466 sshd-session[5413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 00:05:05.818062 systemd-logind[1613]: New session 23 of user core.
Oct 30 00:05:05.827792 systemd[1]: Started session-23.scope - Session 23 of User core.
Oct 30 00:05:06.418675 sshd[5416]: Connection closed by 10.0.0.1 port 47590
Oct 30 00:05:06.419844 sshd-session[5413]: pam_unix(sshd:session): session closed for user core
Oct 30 00:05:06.433535 systemd[1]: sshd@22-10.0.0.82:22-10.0.0.1:47590.service: Deactivated successfully.
Oct 30 00:05:06.437239 systemd[1]: session-23.scope: Deactivated successfully.
Oct 30 00:05:06.438239 systemd-logind[1613]: Session 23 logged out. Waiting for processes to exit.
Oct 30 00:05:06.442783 systemd[1]: Started sshd@23-10.0.0.82:22-10.0.0.1:47604.service - OpenSSH per-connection server daemon (10.0.0.1:47604).
Oct 30 00:05:06.445790 systemd-logind[1613]: Removed session 23.
Oct 30 00:05:06.519440 sshd[5433]: Accepted publickey for core from 10.0.0.1 port 47604 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA
Oct 30 00:05:06.521336 sshd-session[5433]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 00:05:06.526898 systemd-logind[1613]: New session 24 of user core.
Oct 30 00:05:06.536751 systemd[1]: Started session-24.scope - Session 24 of User core.
Oct 30 00:05:06.822395 sshd[5436]: Connection closed by 10.0.0.1 port 47604
Oct 30 00:05:06.823756 sshd-session[5433]: pam_unix(sshd:session): session closed for user core
Oct 30 00:05:06.836837 systemd[1]: sshd@23-10.0.0.82:22-10.0.0.1:47604.service: Deactivated successfully.
Oct 30 00:05:06.840802 systemd[1]: session-24.scope: Deactivated successfully.
Oct 30 00:05:06.842068 systemd-logind[1613]: Session 24 logged out. Waiting for processes to exit.
Oct 30 00:05:06.844715 systemd-logind[1613]: Removed session 24.
Oct 30 00:05:06.846487 systemd[1]: Started sshd@24-10.0.0.82:22-10.0.0.1:47618.service - OpenSSH per-connection server daemon (10.0.0.1:47618).
Oct 30 00:05:06.918284 sshd[5448]: Accepted publickey for core from 10.0.0.1 port 47618 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA
Oct 30 00:05:06.920572 sshd-session[5448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 00:05:06.926756 systemd-logind[1613]: New session 25 of user core.
Oct 30 00:05:06.934856 systemd[1]: Started session-25.scope - Session 25 of User core.
Oct 30 00:05:07.145454 sshd[5451]: Connection closed by 10.0.0.1 port 47618
Oct 30 00:05:07.146420 sshd-session[5448]: pam_unix(sshd:session): session closed for user core
Oct 30 00:05:07.155944 systemd-logind[1613]: Session 25 logged out. Waiting for processes to exit.
Oct 30 00:05:07.158000 systemd[1]: sshd@24-10.0.0.82:22-10.0.0.1:47618.service: Deactivated successfully.
Oct 30 00:05:07.164018 systemd[1]: session-25.scope: Deactivated successfully.
Oct 30 00:05:07.169550 systemd-logind[1613]: Removed session 25.
Oct 30 00:05:10.738377 containerd[1633]: time="2025-10-30T00:05:10.738148251Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Oct 30 00:05:11.426367 containerd[1633]: time="2025-10-30T00:05:11.426256403Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Oct 30 00:05:11.625820 containerd[1633]: time="2025-10-30T00:05:11.625733112Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Oct 30 00:05:11.626086 containerd[1633]: time="2025-10-30T00:05:11.625821761Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Oct 30 00:05:11.626142 kubelet[2857]: E1030 00:05:11.626026 2857 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 30 00:05:11.626142 kubelet[2857]: E1030 00:05:11.626077 2857 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Oct 30 00:05:11.626582 kubelet[2857]: E1030 00:05:11.626166 2857 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-54f47b4cdd-ltgss_calico-apiserver(676e7fcb-c57d-4b5d-87fb-71a75d798467): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Oct 30 00:05:11.626582 kubelet[2857]: E1030 00:05:11.626203 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54f47b4cdd-ltgss" podUID="676e7fcb-c57d-4b5d-87fb-71a75d798467"
Oct 30 00:05:12.170835 systemd[1]: Started sshd@25-10.0.0.82:22-10.0.0.1:51232.service - OpenSSH per-connection server daemon (10.0.0.1:51232).
Oct 30 00:05:12.233929 sshd[5469]: Accepted publickey for core from 10.0.0.1 port 51232 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA
Oct 30 00:05:12.236084 sshd-session[5469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 00:05:12.241471 systemd-logind[1613]: New session 26 of user core.
Oct 30 00:05:12.250914 systemd[1]: Started session-26.scope - Session 26 of User core.
Oct 30 00:05:12.401206 sshd[5472]: Connection closed by 10.0.0.1 port 51232
Oct 30 00:05:12.401826 sshd-session[5469]: pam_unix(sshd:session): session closed for user core
Oct 30 00:05:12.407577 systemd[1]: sshd@25-10.0.0.82:22-10.0.0.1:51232.service: Deactivated successfully.
Oct 30 00:05:12.410374 systemd[1]: session-26.scope: Deactivated successfully.
Oct 30 00:05:12.411403 systemd-logind[1613]: Session 26 logged out. Waiting for processes to exit.
Oct 30 00:05:12.413263 systemd-logind[1613]: Removed session 26.
Oct 30 00:05:14.724824 containerd[1633]: time="2025-10-30T00:05:14.724769808Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6956619011f4b032055c93357f3467b8616d3ebb0ddb660b9c2ed1e1fce8ce26\" id:\"be88a3985ef2e480e94346950fd718dcb03ba08d85b2a5d7d0b3abab839e13ed\" pid:5496 exited_at:{seconds:1761782714 nanos:724360703}"
Oct 30 00:05:15.737773 kubelet[2857]: E1030 00:05:15.737689 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-7dwtp" podUID="76c98e53-eb5b-4690-b648-f39ba68c3761"
Oct 30 00:05:15.739081 kubelet[2857]: E1030 00:05:15.737934 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-778756694d-5pt4t" podUID="c9ccbdc0-7a3b-420c-9200-91bd3b896e9d"
Oct 30 00:05:17.418279 systemd[1]: Started sshd@26-10.0.0.82:22-10.0.0.1:51244.service - OpenSSH per-connection server daemon (10.0.0.1:51244).
Oct 30 00:05:17.477637 sshd[5510]: Accepted publickey for core from 10.0.0.1 port 51244 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA
Oct 30 00:05:17.479318 sshd-session[5510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 00:05:17.484236 systemd-logind[1613]: New session 27 of user core.
Oct 30 00:05:17.494765 systemd[1]: Started session-27.scope - Session 27 of User core.
Oct 30 00:05:17.632505 sshd[5513]: Connection closed by 10.0.0.1 port 51244
Oct 30 00:05:17.632906 sshd-session[5510]: pam_unix(sshd:session): session closed for user core
Oct 30 00:05:17.639119 systemd[1]: sshd@26-10.0.0.82:22-10.0.0.1:51244.service: Deactivated successfully.
Oct 30 00:05:17.642802 systemd[1]: session-27.scope: Deactivated successfully.
Oct 30 00:05:17.644633 systemd-logind[1613]: Session 27 logged out. Waiting for processes to exit.
Oct 30 00:05:17.646763 systemd-logind[1613]: Removed session 27.
Oct 30 00:05:17.737720 kubelet[2857]: E1030 00:05:17.736474 2857 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 30 00:05:17.739630 kubelet[2857]: E1030 00:05:17.738866 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54f47b4cdd-94wcm" podUID="f4d9bc01-6958-4087-977b-6989585a84eb"
Oct 30 00:05:18.737880 kubelet[2857]: E1030 00:05:18.737798 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85cf958ddd-dqv9z" podUID="c1be5870-aa7d-44d6-8228-72dd5ed8c5f5"
Oct 30 00:05:19.740284 kubelet[2857]: E1030 00:05:19.740188 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bmzt2" podUID="dac688c3-f50b-4d08-95db-f1aa2487f334"
Oct 30 00:05:22.647825 systemd[1]: Started sshd@27-10.0.0.82:22-10.0.0.1:39798.service - OpenSSH per-connection server daemon (10.0.0.1:39798).
Oct 30 00:05:22.714282 sshd[5531]: Accepted publickey for core from 10.0.0.1 port 39798 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA
Oct 30 00:05:22.716315 sshd-session[5531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 00:05:22.723716 systemd-logind[1613]: New session 28 of user core.
Oct 30 00:05:22.732801 systemd[1]: Started session-28.scope - Session 28 of User core.
Oct 30 00:05:22.886696 sshd[5536]: Connection closed by 10.0.0.1 port 39798
Oct 30 00:05:22.887130 sshd-session[5531]: pam_unix(sshd:session): session closed for user core
Oct 30 00:05:22.892520 systemd[1]: sshd@27-10.0.0.82:22-10.0.0.1:39798.service: Deactivated successfully.
Oct 30 00:05:22.895285 systemd[1]: session-28.scope: Deactivated successfully.
Oct 30 00:05:22.896521 systemd-logind[1613]: Session 28 logged out. Waiting for processes to exit.
Oct 30 00:05:22.898022 systemd-logind[1613]: Removed session 28.
Oct 30 00:05:23.737292 kubelet[2857]: E1030 00:05:23.737089 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54f47b4cdd-ltgss" podUID="676e7fcb-c57d-4b5d-87fb-71a75d798467"
Oct 30 00:05:27.901865 systemd[1]: Started sshd@28-10.0.0.82:22-10.0.0.1:39810.service - OpenSSH per-connection server daemon (10.0.0.1:39810).
Oct 30 00:05:27.967186 sshd[5553]: Accepted publickey for core from 10.0.0.1 port 39810 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA
Oct 30 00:05:27.969114 sshd-session[5553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 00:05:27.974678 systemd-logind[1613]: New session 29 of user core.
Oct 30 00:05:27.983765 systemd[1]: Started session-29.scope - Session 29 of User core.
Oct 30 00:05:28.221095 sshd[5556]: Connection closed by 10.0.0.1 port 39810
Oct 30 00:05:28.221351 sshd-session[5553]: pam_unix(sshd:session): session closed for user core
Oct 30 00:05:28.227045 systemd[1]: sshd@28-10.0.0.82:22-10.0.0.1:39810.service: Deactivated successfully.
Oct 30 00:05:28.229622 systemd[1]: session-29.scope: Deactivated successfully.
Oct 30 00:05:28.230421 systemd-logind[1613]: Session 29 logged out. Waiting for processes to exit.
Oct 30 00:05:28.231933 systemd-logind[1613]: Removed session 29.
Oct 30 00:05:29.738028 kubelet[2857]: E1030 00:05:29.737958 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-778756694d-5pt4t" podUID="c9ccbdc0-7a3b-420c-9200-91bd3b896e9d"
Oct 30 00:05:29.738736 kubelet[2857]: E1030 00:05:29.738356 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-7dwtp" podUID="76c98e53-eb5b-4690-b648-f39ba68c3761"
Oct 30 00:05:30.737870 kubelet[2857]: E1030 00:05:30.737783 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-54f47b4cdd-94wcm" podUID="f4d9bc01-6958-4087-977b-6989585a84eb"
Oct 30 00:05:32.737505 kubelet[2857]: E1030 00:05:32.737432 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-85cf958ddd-dqv9z" podUID="c1be5870-aa7d-44d6-8228-72dd5ed8c5f5"
Oct 30 00:05:33.233796 systemd[1]: Started sshd@29-10.0.0.82:22-10.0.0.1:39636.service - OpenSSH per-connection server daemon (10.0.0.1:39636).
Oct 30 00:05:33.302208 sshd[5570]: Accepted publickey for core from 10.0.0.1 port 39636 ssh2: RSA SHA256:XtNwpkJ0gwqw3SlNmVlFmsLf+v6wWghXAocmq4xmuyA
Oct 30 00:05:33.304366 sshd-session[5570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Oct 30 00:05:33.309814 systemd-logind[1613]: New session 30 of user core.
Oct 30 00:05:33.324832 systemd[1]: Started session-30.scope - Session 30 of User core.
Oct 30 00:05:34.033919 sshd[5573]: Connection closed by 10.0.0.1 port 39636
Oct 30 00:05:34.034303 sshd-session[5570]: pam_unix(sshd:session): session closed for user core
Oct 30 00:05:34.038895 systemd[1]: sshd@29-10.0.0.82:22-10.0.0.1:39636.service: Deactivated successfully.
Oct 30 00:05:34.041345 systemd[1]: session-30.scope: Deactivated successfully.
Oct 30 00:05:34.042333 systemd-logind[1613]: Session 30 logged out. Waiting for processes to exit.
Oct 30 00:05:34.043820 systemd-logind[1613]: Removed session 30.
Oct 30 00:05:34.739286 kubelet[2857]: E1030 00:05:34.739210 2857 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bmzt2" podUID="dac688c3-f50b-4d08-95db-f1aa2487f334"