Nov 1 00:21:38.741151 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Fri Oct 31 22:16:48 -00 2025
Nov 1 00:21:38.741175 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=06aebf6c20a38bc11b85661c7362dc459d93d17de8abe6e1c0606dc6af554184
Nov 1 00:21:38.741187 kernel: BIOS-provided physical RAM map:
Nov 1 00:21:38.741194 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Nov 1 00:21:38.741201 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Nov 1 00:21:38.741208 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Nov 1 00:21:38.741216 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Nov 1 00:21:38.741223 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Nov 1 00:21:38.741233 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Nov 1 00:21:38.741241 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Nov 1 00:21:38.741250 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Nov 1 00:21:38.741257 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Nov 1 00:21:38.741264 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Nov 1 00:21:38.741271 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Nov 1 00:21:38.741280 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Nov 1 00:21:38.741290 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Nov 1 00:21:38.741300 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Nov 1 00:21:38.741307 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Nov 1 00:21:38.741315 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Nov 1 00:21:38.741323 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Nov 1 00:21:38.741330 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Nov 1 00:21:38.741338 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Nov 1 00:21:38.741345 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Nov 1 00:21:38.741353 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 1 00:21:38.741369 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Nov 1 00:21:38.741379 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 1 00:21:38.741387 kernel: NX (Execute Disable) protection: active
Nov 1 00:21:38.741394 kernel: APIC: Static calls initialized
Nov 1 00:21:38.741402 kernel: e820: update [mem 0x9b319018-0x9b322c57] usable ==> usable
Nov 1 00:21:38.741410 kernel: e820: update [mem 0x9b2dc018-0x9b318e57] usable ==> usable
Nov 1 00:21:38.741417 kernel: extended physical RAM map:
Nov 1 00:21:38.741425 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Nov 1 00:21:38.741433 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Nov 1 00:21:38.741441 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Nov 1 00:21:38.741448 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Nov 1 00:21:38.741456 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Nov 1 00:21:38.741466 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Nov 1 00:21:38.741474 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Nov 1 00:21:38.741481 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2dc017] usable
Nov 1 00:21:38.741489 kernel: reserve setup_data: [mem 0x000000009b2dc018-0x000000009b318e57] usable
Nov 1 00:21:38.741501 kernel: reserve setup_data: [mem 0x000000009b318e58-0x000000009b319017] usable
Nov 1 00:21:38.741511 kernel: reserve setup_data: [mem 0x000000009b319018-0x000000009b322c57] usable
Nov 1 00:21:38.741519 kernel: reserve setup_data: [mem 0x000000009b322c58-0x000000009bd3efff] usable
Nov 1 00:21:38.741527 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Nov 1 00:21:38.741534 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Nov 1 00:21:38.741542 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Nov 1 00:21:38.741550 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Nov 1 00:21:38.741558 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Nov 1 00:21:38.741566 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Nov 1 00:21:38.741576 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Nov 1 00:21:38.741583 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Nov 1 00:21:38.741591 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Nov 1 00:21:38.741599 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Nov 1 00:21:38.741607 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Nov 1 00:21:38.741617 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Nov 1 00:21:38.741627 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 1 00:21:38.741637 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Nov 1 00:21:38.741646 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 1 00:21:38.741659 kernel: efi: EFI v2.7 by EDK II
Nov 1 00:21:38.741669 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Nov 1 00:21:38.741681 kernel: random: crng init done
Nov 1 00:21:38.741693 kernel: efi: Remove mem150: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Nov 1 00:21:38.741704 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Nov 1 00:21:38.741716 kernel: secureboot: Secure boot disabled
Nov 1 00:21:38.741727 kernel: SMBIOS 2.8 present.
Nov 1 00:21:38.741737 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Nov 1 00:21:38.741747 kernel: DMI: Memory slots populated: 1/1
Nov 1 00:21:38.741757 kernel: Hypervisor detected: KVM
Nov 1 00:21:38.741767 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Nov 1 00:21:38.741777 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 1 00:21:38.741785 kernel: kvm-clock: using sched offset of 5773268968 cycles
Nov 1 00:21:38.741797 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 1 00:21:38.741808 kernel: tsc: Detected 2794.748 MHz processor
Nov 1 00:21:38.741818 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 1 00:21:38.741826 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 1 00:21:38.741834 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Nov 1 00:21:38.741844 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Nov 1 00:21:38.741855 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 1 00:21:38.741869 kernel: Using GB pages for direct mapping
Nov 1 00:21:38.741880 kernel: ACPI: Early table checksum verification disabled
Nov 1 00:21:38.741891 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Nov 1 00:21:38.741902 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Nov 1 00:21:38.741913 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:21:38.741924 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:21:38.741934 kernel: ACPI: FACS 0x000000009CBDD000 000040
Nov 1 00:21:38.741945 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:21:38.741959 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:21:38.741969 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:21:38.741980 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 1 00:21:38.741990 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Nov 1 00:21:38.742001 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Nov 1 00:21:38.742011 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Nov 1 00:21:38.742022 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Nov 1 00:21:38.742035 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Nov 1 00:21:38.742046 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Nov 1 00:21:38.742056 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Nov 1 00:21:38.742084 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Nov 1 00:21:38.742094 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Nov 1 00:21:38.742105 kernel: No NUMA configuration found
Nov 1 00:21:38.742116 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Nov 1 00:21:38.742130 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Nov 1 00:21:38.742141 kernel: Zone ranges:
Nov 1 00:21:38.742152 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 1 00:21:38.742163 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Nov 1 00:21:38.742174 kernel: Normal empty
Nov 1 00:21:38.742183 kernel: Device empty
Nov 1 00:21:38.742191 kernel: Movable zone start for each node
Nov 1 00:21:38.742202 kernel: Early memory node ranges
Nov 1 00:21:38.742210 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Nov 1 00:21:38.742221 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Nov 1 00:21:38.742230 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Nov 1 00:21:38.742238 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Nov 1 00:21:38.742246 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Nov 1 00:21:38.742254 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Nov 1 00:21:38.742262 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Nov 1 00:21:38.742272 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Nov 1 00:21:38.742283 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Nov 1 00:21:38.742291 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 1 00:21:38.742306 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Nov 1 00:21:38.742317 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Nov 1 00:21:38.742325 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 1 00:21:38.742333 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Nov 1 00:21:38.742342 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Nov 1 00:21:38.742350 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Nov 1 00:21:38.742372 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Nov 1 00:21:38.742380 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Nov 1 00:21:38.742389 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 1 00:21:38.742397 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 1 00:21:38.742408 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 1 00:21:38.742416 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 1 00:21:38.742425 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 1 00:21:38.742433 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 1 00:21:38.742442 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 1 00:21:38.742450 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 1 00:21:38.742459 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 1 00:21:38.742469 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 1 00:21:38.742478 kernel: TSC deadline timer available
Nov 1 00:21:38.742486 kernel: CPU topo: Max. logical packages: 1
Nov 1 00:21:38.742494 kernel: CPU topo: Max. logical dies: 1
Nov 1 00:21:38.742503 kernel: CPU topo: Max. dies per package: 1
Nov 1 00:21:38.742511 kernel: CPU topo: Max. threads per core: 1
Nov 1 00:21:38.742519 kernel: CPU topo: Num. cores per package: 4
Nov 1 00:21:38.742530 kernel: CPU topo: Num. threads per package: 4
Nov 1 00:21:38.742538 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Nov 1 00:21:38.742547 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 1 00:21:38.742555 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 1 00:21:38.742563 kernel: kvm-guest: setup PV sched yield
Nov 1 00:21:38.742572 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Nov 1 00:21:38.742580 kernel: Booting paravirtualized kernel on KVM
Nov 1 00:21:38.742589 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 1 00:21:38.742600 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Nov 1 00:21:38.742608 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Nov 1 00:21:38.742617 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Nov 1 00:21:38.742625 kernel: pcpu-alloc: [0] 0 1 2 3
Nov 1 00:21:38.742633 kernel: kvm-guest: PV spinlocks enabled
Nov 1 00:21:38.742642 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
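The e820 entries above can be totalled mechanically. A minimal sketch (not part of the boot log; the helper name and sample lines are illustrative) that parses entries in this format and sums the `usable` ranges:

```python
import re

# Matches "... [mem 0xSTART-0xEND] TYPE" as printed in the e820 map above.
E820_RE = re.compile(r"\[mem (0x[0-9a-f]+)-(0x[0-9a-f]+)\] (.+)$")

def usable_bytes(lines):
    """Sum the sizes of all ranges whose type is exactly 'usable'."""
    total = 0
    for line in lines:
        m = E820_RE.search(line)
        if m and m.group(3).strip() == "usable":
            start, end = int(m.group(1), 16), int(m.group(2), 16)
            total += end - start + 1  # e820 ranges are inclusive
    return total

# First three entries copied from the map above.
sample = [
    "BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable",
    "BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable",
    "BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS",
]
print(usable_bytes(sample))  # 7995392 (0xa0000 + 0x700000)
```

On a live system the same map can be recovered from `dmesg` output or, where exposed, `/sys/firmware/memmap`.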
Nov 1 00:21:38.742653 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=06aebf6c20a38bc11b85661c7362dc459d93d17de8abe6e1c0606dc6af554184
Nov 1 00:21:38.742664 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 1 00:21:38.742673 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 1 00:21:38.742682 kernel: Fallback order for Node 0: 0
Nov 1 00:21:38.742690 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Nov 1 00:21:38.742698 kernel: Policy zone: DMA32
Nov 1 00:21:38.742707 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 1 00:21:38.742717 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 1 00:21:38.742726 kernel: ftrace: allocating 40092 entries in 157 pages
Nov 1 00:21:38.742734 kernel: ftrace: allocated 157 pages with 5 groups
Nov 1 00:21:38.742742 kernel: Dynamic Preempt: voluntary
Nov 1 00:21:38.742751 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 1 00:21:38.742760 kernel: rcu: RCU event tracing is enabled.
Nov 1 00:21:38.742768 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 1 00:21:38.742777 kernel: Trampoline variant of Tasks RCU enabled.
Nov 1 00:21:38.742788 kernel: Rude variant of Tasks RCU enabled.
Nov 1 00:21:38.742796 kernel: Tracing variant of Tasks RCU enabled.
Nov 1 00:21:38.742804 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 1 00:21:38.742813 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 1 00:21:38.742823 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
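The "Kernel command line" entry above is a flat list of space-separated parameters (note that rootflags and mount.usrflags appear twice; for most parameters a later occurrence typically overrides an earlier one). A rough sketch, not part of the log, of splitting such a line into a dict; real kernel parsing also handles quoting and early parameters, which this ignores:

```python
def parse_cmdline(cmdline):
    """Split a kernel command line into a dict; bare flags map to True.
    Later duplicates overwrite earlier ones."""
    params = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        params[key] = value if sep else True
    return params

# Abbreviated from the command line printed above.
cmdline = ("rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a "
           "consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 "
           "flatcar.first_boot=detected")
params = parse_cmdline(cmdline)
print(params["root"])     # LABEL=ROOT
print(params["console"])  # ttyS0,115200
```

The same parsing works against `/proc/cmdline` on a running system.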
Nov 1 00:21:38.742832 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 1 00:21:38.742841 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 1 00:21:38.742851 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Nov 1 00:21:38.742860 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 1 00:21:38.742868 kernel: Console: colour dummy device 80x25
Nov 1 00:21:38.742877 kernel: printk: legacy console [ttyS0] enabled
Nov 1 00:21:38.742885 kernel: ACPI: Core revision 20240827
Nov 1 00:21:38.742893 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 1 00:21:38.742902 kernel: APIC: Switch to symmetric I/O mode setup
Nov 1 00:21:38.742912 kernel: x2apic enabled
Nov 1 00:21:38.742921 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 1 00:21:38.742929 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 1 00:21:38.742938 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 1 00:21:38.742946 kernel: kvm-guest: setup PV IPIs
Nov 1 00:21:38.742955 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 1 00:21:38.742963 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Nov 1 00:21:38.742974 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Nov 1 00:21:38.742982 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 1 00:21:38.742991 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 1 00:21:38.742999 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 1 00:21:38.743008 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 1 00:21:38.743016 kernel: Spectre V2 : Mitigation: Retpolines
Nov 1 00:21:38.743026 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 1 00:21:38.743039 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 1 00:21:38.743050 kernel: active return thunk: retbleed_return_thunk
Nov 1 00:21:38.743079 kernel: RETBleed: Mitigation: untrained return thunk
Nov 1 00:21:38.743094 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 1 00:21:38.743105 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 1 00:21:38.743117 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 1 00:21:38.743129 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 1 00:21:38.743144 kernel: active return thunk: srso_return_thunk
Nov 1 00:21:38.743155 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 1 00:21:38.743166 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 1 00:21:38.743177 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 1 00:21:38.743188 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 1 00:21:38.743199 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 1 00:21:38.743211 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
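The mitigation entries above follow a "vulnerability : status" pattern that is easy to collect. A small illustrative sketch (the helper name is an assumption, not from the log); on a live system the authoritative view is `/sys/devices/system/cpu/vulnerabilities/`:

```python
def mitigation_map(lines):
    """Group 'Vulnerability : status' log lines by vulnerability name."""
    out = {}
    for line in lines:
        if " : " in line:
            vuln, status = line.split(" : ", 1)
            out.setdefault(vuln.strip(), []).append(status.strip())
    return out

# Three mitigation lines copied from the log above.
sample = [
    "Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization",
    "Spectre V2 : Mitigation: Retpolines",
    "Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT",
]
m = mitigation_map(sample)
print(len(m["Spectre V2"]))  # 2
```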
Nov 1 00:21:38.743225 kernel: Freeing SMP alternatives memory: 32K
Nov 1 00:21:38.743236 kernel: pid_max: default: 32768 minimum: 301
Nov 1 00:21:38.743247 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 1 00:21:38.743258 kernel: landlock: Up and running.
Nov 1 00:21:38.743269 kernel: SELinux: Initializing.
Nov 1 00:21:38.743281 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 1 00:21:38.743292 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 1 00:21:38.743306 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 1 00:21:38.743317 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 1 00:21:38.743328 kernel: ... version: 0
Nov 1 00:21:38.743340 kernel: ... bit width: 48
Nov 1 00:21:38.743351 kernel: ... generic registers: 6
Nov 1 00:21:38.743372 kernel: ... value mask: 0000ffffffffffff
Nov 1 00:21:38.743384 kernel: ... max period: 00007fffffffffff
Nov 1 00:21:38.743400 kernel: ... fixed-purpose events: 0
Nov 1 00:21:38.743411 kernel: ... event mask: 000000000000003f
Nov 1 00:21:38.743422 kernel: signal: max sigframe size: 1776
Nov 1 00:21:38.743434 kernel: rcu: Hierarchical SRCU implementation.
Nov 1 00:21:38.743446 kernel: rcu: Max phase no-delay instances is 400.
Nov 1 00:21:38.743461 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 1 00:21:38.743473 kernel: smp: Bringing up secondary CPUs ...
Nov 1 00:21:38.743488 kernel: smpboot: x86: Booting SMP configuration:
Nov 1 00:21:38.743500 kernel: .... node #0, CPUs: #1 #2 #3
Nov 1 00:21:38.743511 kernel: smp: Brought up 1 node, 4 CPUs
Nov 1 00:21:38.743522 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Nov 1 00:21:38.743535 kernel: Memory: 2445192K/2565800K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15964K init, 2080K bss, 114672K reserved, 0K cma-reserved)
Nov 1 00:21:38.743546 kernel: devtmpfs: initialized
Nov 1 00:21:38.743557 kernel: x86/mm: Memory block size: 128MB
Nov 1 00:21:38.743572 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Nov 1 00:21:38.743583 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Nov 1 00:21:38.743595 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Nov 1 00:21:38.743606 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Nov 1 00:21:38.743618 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Nov 1 00:21:38.743630 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Nov 1 00:21:38.743641 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 1 00:21:38.743656 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 1 00:21:38.743667 kernel: pinctrl core: initialized pinctrl subsystem
Nov 1 00:21:38.743679 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 1 00:21:38.743690 kernel: audit: initializing netlink subsys (disabled)
Nov 1 00:21:38.743706 kernel: audit: type=2000 audit(1761956495.095:1): state=initialized audit_enabled=0 res=1
Nov 1 00:21:38.743718 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 1 00:21:38.743729 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 1 00:21:38.743744 kernel: cpuidle: using governor menu
Nov 1 00:21:38.743755 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 1 00:21:38.743766 kernel: dca service started, version 1.12.1
Nov 1 00:21:38.743778 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Nov 1 00:21:38.743786 kernel: PCI: Using configuration type 1 for base access
Nov 1 00:21:38.743795 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 1 00:21:38.743803 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 1 00:21:38.743814 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 1 00:21:38.743823 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 1 00:21:38.743831 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 1 00:21:38.743839 kernel: ACPI: Added _OSI(Module Device)
Nov 1 00:21:38.743847 kernel: ACPI: Added _OSI(Processor Device)
Nov 1 00:21:38.743856 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 1 00:21:38.743864 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 1 00:21:38.743875 kernel: ACPI: Interpreter enabled
Nov 1 00:21:38.743883 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 1 00:21:38.743891 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 1 00:21:38.743900 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 1 00:21:38.743908 kernel: PCI: Using E820 reservations for host bridge windows
Nov 1 00:21:38.743917 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 1 00:21:38.743925 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 1 00:21:38.744209 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 1 00:21:38.744497 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 1 00:21:38.744723 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 1 00:21:38.744741 kernel: PCI host bridge to bus 0000:00
Nov 1 00:21:38.744966 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 1 00:21:38.745264 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 1 00:21:38.745455 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 1 00:21:38.745658 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Nov 1 00:21:38.745863 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Nov 1 00:21:38.746090 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Nov 1 00:21:38.746306 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 1 00:21:38.746589 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Nov 1 00:21:38.746848 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Nov 1 00:21:38.747111 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Nov 1 00:21:38.747373 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Nov 1 00:21:38.747610 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Nov 1 00:21:38.747845 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 1 00:21:38.748127 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 1 00:21:38.748385 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Nov 1 00:21:38.748624 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Nov 1 00:21:38.748862 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Nov 1 00:21:38.749135 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 1 00:21:38.749463 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Nov 1 00:21:38.749697 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Nov 1 00:21:38.749918 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Nov 1 00:21:38.750187 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 1 00:21:38.750431 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Nov 1 00:21:38.750657 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Nov 1 00:21:38.750889 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Nov 1 00:21:38.751173 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Nov 1 00:21:38.751418 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Nov 1 00:21:38.751638 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 1 00:21:38.751870 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Nov 1 00:21:38.752139 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Nov 1 00:21:38.752388 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Nov 1 00:21:38.752629 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Nov 1 00:21:38.752855 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Nov 1 00:21:38.752873 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 1 00:21:38.752885 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 1 00:21:38.752898 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 1 00:21:38.752916 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 1 00:21:38.752928 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 1 00:21:38.752941 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 1 00:21:38.752953 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 1 00:21:38.752966 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 1 00:21:38.752978 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 1 00:21:38.752990 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 1 00:21:38.753006 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 1 00:21:38.753019 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 1 00:21:38.753032 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 1 00:21:38.753045 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 1 00:21:38.753057 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 1 00:21:38.753091 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 1 00:21:38.753103 kernel: iommu: Default domain type: Translated
Nov 1 00:21:38.753120 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 1 00:21:38.753132 kernel: efivars: Registered efivars operations
Nov 1 00:21:38.753145 kernel: PCI: Using ACPI for IRQ routing
Nov 1 00:21:38.753157 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 1 00:21:38.753170 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Nov 1 00:21:38.753182 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Nov 1 00:21:38.753194 kernel: e820: reserve RAM buffer [mem 0x9b2dc018-0x9bffffff]
Nov 1 00:21:38.753209 kernel: e820: reserve RAM buffer [mem 0x9b319018-0x9bffffff]
Nov 1 00:21:38.753221 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Nov 1 00:21:38.753233 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Nov 1 00:21:38.753246 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Nov 1 00:21:38.753258 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Nov 1 00:21:38.753502 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 1 00:21:38.753729 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 1 00:21:38.753949 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 1 00:21:38.753966 kernel: vgaarb: loaded
Nov 1 00:21:38.753978 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 1 00:21:38.753989 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 1 00:21:38.754001 kernel: clocksource: Switched to clocksource kvm-clock
Nov 1 00:21:38.754012 kernel: VFS: Disk quotas dquot_6.6.0
Nov 1 00:21:38.754023 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 1 00:21:38.754040 kernel: pnp: PnP ACPI init
Nov 1 00:21:38.754350 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Nov 1 00:21:38.754391 kernel: pnp: PnP ACPI: found 6 devices
Nov 1 00:21:38.754404 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 1 00:21:38.754417 kernel: NET: Registered PF_INET protocol family
Nov 1 00:21:38.754429 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 1 00:21:38.754445 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 1 00:21:38.754457 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 1 00:21:38.754469 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 1 00:21:38.754480 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 1 00:21:38.754492 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 1 00:21:38.754504 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 1 00:21:38.754516 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 1 00:21:38.754532 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 1 00:21:38.754545 kernel: NET: Registered PF_XDP protocol family
Nov 1 00:21:38.754786 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Nov 1 00:21:38.755020 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Nov 1 00:21:38.755226 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 1 00:21:38.755402 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 1 00:21:38.755572 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 1 00:21:38.755763 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
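Each root-bus window above is an inclusive [start-end] range. A quick sketch (helper name assumed, not from the log) for computing a window's size from a line in this format:

```python
import re

# Matches "[io 0xA-0xB" or "[mem 0xA-0xB" in the resource lines above.
RES_RE = re.compile(r"\[(io|mem) (0x[0-9a-f]+)-(0x[0-9a-f]+)")

def window_size(line):
    """Return the size (bytes for mem, ports for io) of an inclusive range."""
    m = RES_RE.search(line)
    if not m:
        return None
    return int(m.group(3), 16) - int(m.group(2), 16) + 1

line = "pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]"
print(hex(window_size(line)))  # 0x43000000
```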
Nov 1 00:21:38.755976 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window] Nov 1 00:21:38.756231 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window] Nov 1 00:21:38.756250 kernel: PCI: CLS 0 bytes, default 64 Nov 1 00:21:38.756263 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns Nov 1 00:21:38.756281 kernel: Initialise system trusted keyrings Nov 1 00:21:38.756293 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Nov 1 00:21:38.756305 kernel: Key type asymmetric registered Nov 1 00:21:38.756318 kernel: Asymmetric key parser 'x509' registered Nov 1 00:21:38.756330 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Nov 1 00:21:38.756345 kernel: io scheduler mq-deadline registered Nov 1 00:21:38.756370 kernel: io scheduler kyber registered Nov 1 00:21:38.756383 kernel: io scheduler bfq registered Nov 1 00:21:38.756395 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 1 00:21:38.756408 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Nov 1 00:21:38.756420 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Nov 1 00:21:38.756433 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Nov 1 00:21:38.756445 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 1 00:21:38.756461 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 1 00:21:38.756473 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Nov 1 00:21:38.756485 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Nov 1 00:21:38.756497 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Nov 1 00:21:38.756727 kernel: rtc_cmos 00:04: RTC can wake from S4 Nov 1 00:21:38.756949 kernel: rtc_cmos 00:04: registered as rtc0 Nov 1 00:21:38.756972 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Nov 1 00:21:38.757210 kernel: rtc_cmos 00:04: setting system clock to 2025-11-01T00:21:36 UTC 
(1761956496) Nov 1 00:21:38.757440 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram Nov 1 00:21:38.757459 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Nov 1 00:21:38.757472 kernel: efifb: probing for efifb Nov 1 00:21:38.757484 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k Nov 1 00:21:38.757497 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1 Nov 1 00:21:38.757514 kernel: efifb: scrolling: redraw Nov 1 00:21:38.757526 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Nov 1 00:21:38.757538 kernel: Console: switching to colour frame buffer device 160x50 Nov 1 00:21:38.757550 kernel: fb0: EFI VGA frame buffer device Nov 1 00:21:38.757562 kernel: pstore: Using crash dump compression: deflate Nov 1 00:21:38.757574 kernel: pstore: Registered efi_pstore as persistent store backend Nov 1 00:21:38.757587 kernel: NET: Registered PF_INET6 protocol family Nov 1 00:21:38.757602 kernel: Segment Routing with IPv6 Nov 1 00:21:38.757615 kernel: In-situ OAM (IOAM) with IPv6 Nov 1 00:21:38.757627 kernel: NET: Registered PF_PACKET protocol family Nov 1 00:21:38.757640 kernel: Key type dns_resolver registered Nov 1 00:21:38.757653 kernel: IPI shorthand broadcast: enabled Nov 1 00:21:38.757665 kernel: sched_clock: Marking stable (1887005115, 400218586)->(2386828365, -99604664) Nov 1 00:21:38.757678 kernel: registered taskstats version 1 Nov 1 00:21:38.757694 kernel: Loading compiled-in X.509 certificates Nov 1 00:21:38.757707 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 82c585ed20587b8c5c20a8f7d03f29967775c2e4' Nov 1 00:21:38.757719 kernel: Demotion targets for Node 0: null Nov 1 00:21:38.757730 kernel: Key type .fscrypt registered Nov 1 00:21:38.757742 kernel: Key type fscrypt-provisioning registered Nov 1 00:21:38.757754 kernel: ima: No TPM chip found, activating TPM-bypass! 
Nov 1 00:21:38.757766 kernel: ima: Allocated hash algorithm: sha1 Nov 1 00:21:38.757778 kernel: ima: No architecture policies found Nov 1 00:21:38.757793 kernel: clk: Disabling unused clocks Nov 1 00:21:38.757805 kernel: Freeing unused kernel image (initmem) memory: 15964K Nov 1 00:21:38.757817 kernel: Write protecting the kernel read-only data: 40960k Nov 1 00:21:38.757829 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Nov 1 00:21:38.757841 kernel: Run /init as init process Nov 1 00:21:38.757854 kernel: with arguments: Nov 1 00:21:38.757866 kernel: /init Nov 1 00:21:38.757881 kernel: with environment: Nov 1 00:21:38.757893 kernel: HOME=/ Nov 1 00:21:38.757904 kernel: TERM=linux Nov 1 00:21:38.757916 kernel: SCSI subsystem initialized Nov 1 00:21:38.757941 kernel: libata version 3.00 loaded. Nov 1 00:21:38.758211 kernel: ahci 0000:00:1f.2: version 3.0 Nov 1 00:21:38.758232 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Nov 1 00:21:38.758491 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Nov 1 00:21:38.758761 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Nov 1 00:21:38.758995 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Nov 1 00:21:38.759284 kernel: scsi host0: ahci Nov 1 00:21:38.759551 kernel: scsi host1: ahci Nov 1 00:21:38.759820 kernel: scsi host2: ahci Nov 1 00:21:38.760121 kernel: scsi host3: ahci Nov 1 00:21:38.760394 kernel: scsi host4: ahci Nov 1 00:21:38.760658 kernel: scsi host5: ahci Nov 1 00:21:38.760679 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1 Nov 1 00:21:38.760692 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1 Nov 1 00:21:38.760711 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1 Nov 1 00:21:38.760723 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1 Nov 1 00:21:38.760736 kernel: ata5: SATA max UDMA/133 abar 
m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1 Nov 1 00:21:38.760748 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1 Nov 1 00:21:38.760760 kernel: ata5: SATA link down (SStatus 0 SControl 300) Nov 1 00:21:38.760773 kernel: ata4: SATA link down (SStatus 0 SControl 300) Nov 1 00:21:38.760785 kernel: ata6: SATA link down (SStatus 0 SControl 300) Nov 1 00:21:38.760801 kernel: ata1: SATA link down (SStatus 0 SControl 300) Nov 1 00:21:38.760813 kernel: ata2: SATA link down (SStatus 0 SControl 300) Nov 1 00:21:38.760825 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Nov 1 00:21:38.760837 kernel: ata3.00: LPM support broken, forcing max_power Nov 1 00:21:38.760849 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Nov 1 00:21:38.760865 kernel: ata3.00: applying bridge limits Nov 1 00:21:38.760878 kernel: ata3.00: LPM support broken, forcing max_power Nov 1 00:21:38.760892 kernel: ata3.00: configured for UDMA/100 Nov 1 00:21:38.761170 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Nov 1 00:21:38.761517 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Nov 1 00:21:38.761739 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Nov 1 00:21:38.761757 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 1 00:21:38.761769 kernel: GPT:16515071 != 27000831 Nov 1 00:21:38.761788 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 1 00:21:38.761800 kernel: GPT:16515071 != 27000831 Nov 1 00:21:38.761811 kernel: GPT: Use GNU Parted to correct GPT errors. 
Nov 1 00:21:38.761824 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 1 00:21:38.762061 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Nov 1 00:21:38.762097 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 1 00:21:38.762341 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Nov 1 00:21:38.762376 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 1 00:21:38.762388 kernel: device-mapper: uevent: version 1.0.3 Nov 1 00:21:38.762401 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 1 00:21:38.762413 kernel: device-mapper: verity: sha256 using shash "sha256-generic" Nov 1 00:21:38.762425 kernel: raid6: avx2x4 gen() 22193 MB/s Nov 1 00:21:38.762437 kernel: raid6: avx2x2 gen() 27725 MB/s Nov 1 00:21:38.762449 kernel: raid6: avx2x1 gen() 23484 MB/s Nov 1 00:21:38.762465 kernel: raid6: using algorithm avx2x2 gen() 27725 MB/s Nov 1 00:21:38.762477 kernel: raid6: .... 
xor() 18923 MB/s, rmw enabled Nov 1 00:21:38.762489 kernel: raid6: using avx2x2 recovery algorithm Nov 1 00:21:38.762501 kernel: xor: automatically using best checksumming function avx Nov 1 00:21:38.762513 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 1 00:21:38.762526 kernel: BTRFS: device fsid 95d044e5-fb6f-4378-956f-63399a32528b devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (181) Nov 1 00:21:38.762538 kernel: BTRFS info (device dm-0): first mount of filesystem 95d044e5-fb6f-4378-956f-63399a32528b Nov 1 00:21:38.762554 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:21:38.762566 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 1 00:21:38.762578 kernel: BTRFS info (device dm-0): enabling free space tree Nov 1 00:21:38.762590 kernel: loop: module loaded Nov 1 00:21:38.762602 kernel: loop0: detected capacity change from 0 to 100120 Nov 1 00:21:38.762616 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 1 00:21:38.762631 systemd[1]: Successfully made /usr/ read-only. Nov 1 00:21:38.762652 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 1 00:21:38.762666 systemd[1]: Detected virtualization kvm. Nov 1 00:21:38.762678 systemd[1]: Detected architecture x86-64. Nov 1 00:21:38.762691 systemd[1]: Running in initrd. Nov 1 00:21:38.762703 systemd[1]: No hostname configured, using default hostname. Nov 1 00:21:38.762719 systemd[1]: Hostname set to . Nov 1 00:21:38.762731 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Nov 1 00:21:38.762744 systemd[1]: Queued start job for default target initrd.target. 
Nov 1 00:21:38.762757 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 1 00:21:38.762769 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 1 00:21:38.762782 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 1 00:21:38.762796 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 1 00:21:38.762812 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 1 00:21:38.762826 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 1 00:21:38.762842 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 1 00:21:38.762855 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 1 00:21:38.762868 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 1 00:21:38.762881 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 1 00:21:38.762898 systemd[1]: Reached target paths.target - Path Units. Nov 1 00:21:38.762910 systemd[1]: Reached target slices.target - Slice Units. Nov 1 00:21:38.762923 systemd[1]: Reached target swap.target - Swaps. Nov 1 00:21:38.762936 systemd[1]: Reached target timers.target - Timer Units. Nov 1 00:21:38.762948 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 1 00:21:38.762961 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 1 00:21:38.762973 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 1 00:21:38.762990 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Nov 1 00:21:38.763003 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Nov 1 00:21:38.763015 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 1 00:21:38.763028 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 1 00:21:38.763040 systemd[1]: Reached target sockets.target - Socket Units. Nov 1 00:21:38.763054 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 1 00:21:38.763091 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 1 00:21:38.763104 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 1 00:21:38.763116 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 1 00:21:38.763130 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 1 00:21:38.763143 systemd[1]: Starting systemd-fsck-usr.service... Nov 1 00:21:38.763155 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 1 00:21:38.763168 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 1 00:21:38.763183 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 00:21:38.763197 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 1 00:21:38.763209 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 1 00:21:38.763222 systemd[1]: Finished systemd-fsck-usr.service. Nov 1 00:21:38.763237 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 1 00:21:38.763290 systemd-journald[317]: Collecting audit messages is disabled. Nov 1 00:21:38.763320 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Nov 1 00:21:38.763336 systemd-journald[317]: Journal started Nov 1 00:21:38.763372 systemd-journald[317]: Runtime Journal (/run/log/journal/de52f4171d2c4ad498a9b27a541f1dd0) is 6M, max 48.1M, 42.1M free. Nov 1 00:21:38.767054 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 1 00:21:38.771112 systemd[1]: Started systemd-journald.service - Journal Service. Nov 1 00:21:38.784430 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 1 00:21:38.788152 kernel: Bridge firewalling registered Nov 1 00:21:38.788213 systemd-modules-load[320]: Inserted module 'br_netfilter' Nov 1 00:21:38.789320 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 1 00:21:38.791112 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 1 00:21:38.799661 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:21:38.813228 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 1 00:21:38.823241 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 1 00:21:38.827580 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 1 00:21:38.840598 systemd-tmpfiles[337]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 1 00:21:38.844016 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 1 00:21:38.847473 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 1 00:21:38.850316 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 1 00:21:38.857324 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 1 00:21:38.863690 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Nov 1 00:21:38.884599 dracut-cmdline[357]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=06aebf6c20a38bc11b85661c7362dc459d93d17de8abe6e1c0606dc6af554184 Nov 1 00:21:38.928371 systemd-resolved[358]: Positive Trust Anchors: Nov 1 00:21:38.928392 systemd-resolved[358]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 00:21:38.928398 systemd-resolved[358]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 1 00:21:38.928439 systemd-resolved[358]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 1 00:21:38.958467 systemd-resolved[358]: Defaulting to hostname 'linux'. Nov 1 00:21:38.960204 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 1 00:21:38.963959 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 1 00:21:39.065132 kernel: Loading iSCSI transport class v2.0-870. 
Nov 1 00:21:39.086126 kernel: iscsi: registered transport (tcp) Nov 1 00:21:39.111536 kernel: iscsi: registered transport (qla4xxx) Nov 1 00:21:39.111614 kernel: QLogic iSCSI HBA Driver Nov 1 00:21:39.143657 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 1 00:21:39.185821 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 1 00:21:39.191811 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 1 00:21:39.270250 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 1 00:21:39.274318 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 1 00:21:39.277450 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 1 00:21:39.326791 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 1 00:21:39.337093 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 1 00:21:39.370057 systemd-udevd[597]: Using default interface naming scheme 'v257'. Nov 1 00:21:39.386696 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 1 00:21:39.389300 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 1 00:21:39.421438 dracut-pre-trigger[641]: rd.md=0: removing MD RAID activation Nov 1 00:21:39.452290 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 1 00:21:39.460901 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 1 00:21:39.479498 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 1 00:21:39.485300 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Nov 1 00:21:39.531243 systemd-networkd[729]: lo: Link UP Nov 1 00:21:39.531257 systemd-networkd[729]: lo: Gained carrier Nov 1 00:21:39.534883 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 1 00:21:39.538620 systemd[1]: Reached target network.target - Network. Nov 1 00:21:39.590419 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 1 00:21:39.596846 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 1 00:21:39.664677 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Nov 1 00:21:39.677902 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Nov 1 00:21:39.696216 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Nov 1 00:21:39.708108 kernel: cryptd: max_cpu_qlen set to 1000 Nov 1 00:21:39.717122 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 1 00:21:39.723687 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 1 00:21:39.740693 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 00:21:39.741766 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:21:39.747175 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 00:21:39.762211 kernel: AES CTR mode by8 optimization enabled Nov 1 00:21:39.763002 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 00:21:39.782398 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Nov 1 00:21:39.784352 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 00:21:39.785030 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Nov 1 00:21:39.786301 systemd-networkd[729]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 1 00:21:39.786306 systemd-networkd[729]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 00:21:39.787838 systemd-networkd[729]: eth0: Link UP Nov 1 00:21:39.788100 systemd-networkd[729]: eth0: Gained carrier Nov 1 00:21:39.788110 systemd-networkd[729]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 1 00:21:39.789639 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 00:21:39.819197 systemd-networkd[729]: eth0: DHCPv4 address 10.0.0.116/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 1 00:21:39.833979 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:21:40.042146 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 1 00:21:40.043500 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 1 00:21:40.046964 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 1 00:21:40.051617 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 1 00:21:40.056642 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 1 00:21:40.066341 disk-uuid[774]: Primary Header is updated. Nov 1 00:21:40.066341 disk-uuid[774]: Secondary Entries is updated. Nov 1 00:21:40.066341 disk-uuid[774]: Secondary Header is updated. Nov 1 00:21:40.113374 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 1 00:21:40.128903 systemd-resolved[358]: Detected conflict on linux IN A 10.0.0.116 Nov 1 00:21:40.128918 systemd-resolved[358]: Hostname conflict, changing published hostname from 'linux' to 'linux11'. Nov 1 00:21:41.117287 disk-uuid[848]: Warning: The kernel is still using the old partition table. 
Nov 1 00:21:41.117287 disk-uuid[848]: The new table will be used at the next reboot or after you Nov 1 00:21:41.117287 disk-uuid[848]: run partprobe(8) or kpartx(8) Nov 1 00:21:41.117287 disk-uuid[848]: The operation has completed successfully. Nov 1 00:21:41.129878 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 1 00:21:41.130108 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 1 00:21:41.135394 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 1 00:21:41.184103 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (865) Nov 1 00:21:41.188156 kernel: BTRFS info (device vda6): first mount of filesystem c2b94f7b-240a-42e5-82e9-13bc01b64bda Nov 1 00:21:41.188211 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:21:41.193468 kernel: BTRFS info (device vda6): turning on async discard Nov 1 00:21:41.193531 kernel: BTRFS info (device vda6): enabling free space tree Nov 1 00:21:41.203123 kernel: BTRFS info (device vda6): last unmount of filesystem c2b94f7b-240a-42e5-82e9-13bc01b64bda Nov 1 00:21:41.204776 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 1 00:21:41.209935 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Nov 1 00:21:41.351501 ignition[884]: Ignition 2.22.0 Nov 1 00:21:41.351515 ignition[884]: Stage: fetch-offline Nov 1 00:21:41.351571 ignition[884]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:21:41.351587 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 1 00:21:41.351705 ignition[884]: parsed url from cmdline: "" Nov 1 00:21:41.351710 ignition[884]: no config URL provided Nov 1 00:21:41.351717 ignition[884]: reading system config file "/usr/lib/ignition/user.ign" Nov 1 00:21:41.351737 ignition[884]: no config at "/usr/lib/ignition/user.ign" Nov 1 00:21:41.351793 ignition[884]: op(1): [started] loading QEMU firmware config module Nov 1 00:21:41.351800 ignition[884]: op(1): executing: "modprobe" "qemu_fw_cfg" Nov 1 00:21:41.368922 ignition[884]: op(1): [finished] loading QEMU firmware config module Nov 1 00:21:41.407384 systemd-networkd[729]: eth0: Gained IPv6LL Nov 1 00:21:41.462103 ignition[884]: parsing config with SHA512: f10b222b13a91bf180c5f386c6cd0a42b4e8fb23fbf2f360da8c918e0d9f002501fcfefe5a490f3126d8695598b82157261e32a80173fd70fbca4e29e4b27269 Nov 1 00:21:41.469802 unknown[884]: fetched base config from "system" Nov 1 00:21:41.469822 unknown[884]: fetched user config from "qemu" Nov 1 00:21:41.470401 ignition[884]: fetch-offline: fetch-offline passed Nov 1 00:21:41.470491 ignition[884]: Ignition finished successfully Nov 1 00:21:41.476227 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 1 00:21:41.481218 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 1 00:21:41.485570 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Nov 1 00:21:41.532472 ignition[894]: Ignition 2.22.0 Nov 1 00:21:41.532485 ignition[894]: Stage: kargs Nov 1 00:21:41.532641 ignition[894]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:21:41.532651 ignition[894]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 1 00:21:41.534790 ignition[894]: kargs: kargs passed Nov 1 00:21:41.534861 ignition[894]: Ignition finished successfully Nov 1 00:21:41.541381 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 1 00:21:41.544710 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 1 00:21:41.588319 ignition[902]: Ignition 2.22.0 Nov 1 00:21:41.588332 ignition[902]: Stage: disks Nov 1 00:21:41.588480 ignition[902]: no configs at "/usr/lib/ignition/base.d" Nov 1 00:21:41.588490 ignition[902]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 1 00:21:41.589200 ignition[902]: disks: disks passed Nov 1 00:21:41.589262 ignition[902]: Ignition finished successfully Nov 1 00:21:41.596421 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 1 00:21:41.600525 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 1 00:21:41.604045 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 1 00:21:41.604821 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 1 00:21:41.609047 systemd[1]: Reached target sysinit.target - System Initialization. Nov 1 00:21:41.609681 systemd[1]: Reached target basic.target - Basic System. Nov 1 00:21:41.611702 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 1 00:21:41.656147 systemd-fsck[912]: ROOT: clean, 15/456736 files, 38230/456704 blocks Nov 1 00:21:41.665363 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 1 00:21:41.670587 systemd[1]: Mounting sysroot.mount - /sysroot... 
Nov 1 00:21:41.797116 kernel: EXT4-fs (vda9): mounted filesystem 64a17da1-5670-45af-8ec7-07540a245d0c r/w with ordered data mode. Quota mode: none. Nov 1 00:21:41.798212 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 1 00:21:41.803360 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 1 00:21:41.808930 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 1 00:21:41.813035 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 1 00:21:41.815191 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 1 00:21:41.815240 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 1 00:21:41.815271 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 1 00:21:41.840216 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 1 00:21:41.844256 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 1 00:21:41.856094 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (920) Nov 1 00:21:41.856152 kernel: BTRFS info (device vda6): first mount of filesystem c2b94f7b-240a-42e5-82e9-13bc01b64bda Nov 1 00:21:41.856169 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Nov 1 00:21:41.856184 kernel: BTRFS info (device vda6): turning on async discard Nov 1 00:21:41.856200 kernel: BTRFS info (device vda6): enabling free space tree Nov 1 00:21:41.858192 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 1 00:21:41.929141 initrd-setup-root[944]: cut: /sysroot/etc/passwd: No such file or directory Nov 1 00:21:41.935765 initrd-setup-root[951]: cut: /sysroot/etc/group: No such file or directory Nov 1 00:21:41.943376 initrd-setup-root[958]: cut: /sysroot/etc/shadow: No such file or directory Nov 1 00:21:41.949970 initrd-setup-root[965]: cut: /sysroot/etc/gshadow: No such file or directory Nov 1 00:21:42.083726 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 1 00:21:42.087753 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 1 00:21:42.090904 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 1 00:21:42.122159 kernel: BTRFS info (device vda6): last unmount of filesystem c2b94f7b-240a-42e5-82e9-13bc01b64bda Nov 1 00:21:42.142453 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 1 00:21:42.161775 ignition[1033]: INFO : Ignition 2.22.0 Nov 1 00:21:42.161775 ignition[1033]: INFO : Stage: mount Nov 1 00:21:42.164827 ignition[1033]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 1 00:21:42.164827 ignition[1033]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 1 00:21:42.164827 ignition[1033]: INFO : mount: mount passed Nov 1 00:21:42.164827 ignition[1033]: INFO : Ignition finished successfully Nov 1 00:21:42.171290 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 1 00:21:42.176725 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 1 00:21:42.178787 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 1 00:21:42.203626 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Nov 1 00:21:42.232120 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1046)
Nov 1 00:21:42.235841 kernel: BTRFS info (device vda6): first mount of filesystem c2b94f7b-240a-42e5-82e9-13bc01b64bda
Nov 1 00:21:42.235889 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 1 00:21:42.240681 kernel: BTRFS info (device vda6): turning on async discard
Nov 1 00:21:42.240715 kernel: BTRFS info (device vda6): enabling free space tree
Nov 1 00:21:42.242852 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 1 00:21:42.287399 ignition[1063]: INFO : Ignition 2.22.0
Nov 1 00:21:42.287399 ignition[1063]: INFO : Stage: files
Nov 1 00:21:42.290266 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 1 00:21:42.290266 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 1 00:21:42.294449 ignition[1063]: DEBUG : files: compiled without relabeling support, skipping
Nov 1 00:21:42.298613 ignition[1063]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 1 00:21:42.298613 ignition[1063]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 1 00:21:42.304480 ignition[1063]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 1 00:21:42.307026 ignition[1063]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 1 00:21:42.307026 ignition[1063]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 1 00:21:42.305292 unknown[1063]: wrote ssh authorized keys file for user: core
Nov 1 00:21:42.313943 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Nov 1 00:21:42.313943 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Nov 1 00:21:42.351610 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 1 00:21:42.426412 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Nov 1 00:21:42.434368 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 1 00:21:42.434368 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 1 00:21:42.434368 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 1 00:21:42.434368 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 1 00:21:42.434368 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 1 00:21:42.434368 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 1 00:21:42.434368 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 1 00:21:42.434368 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 1 00:21:42.434368 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 1 00:21:42.434368 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 1 00:21:42.434368 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 1 00:21:42.470515 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 1 00:21:42.470515 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 1 00:21:42.470515 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Nov 1 00:21:42.859544 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 1 00:21:43.271679 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Nov 1 00:21:43.271679 ignition[1063]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 1 00:21:43.279611 ignition[1063]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 1 00:21:43.306334 ignition[1063]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 1 00:21:43.306334 ignition[1063]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 1 00:21:43.306334 ignition[1063]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Nov 1 00:21:43.306334 ignition[1063]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 1 00:21:43.306334 ignition[1063]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 1 00:21:43.306334 ignition[1063]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Nov 1 00:21:43.306334 ignition[1063]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Nov 1 00:21:43.345520 ignition[1063]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Nov 1 00:21:43.353376 ignition[1063]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Nov 1 00:21:43.356578 ignition[1063]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Nov 1 00:21:43.356578 ignition[1063]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Nov 1 00:21:43.356578 ignition[1063]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Nov 1 00:21:43.356578 ignition[1063]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 1 00:21:43.356578 ignition[1063]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 1 00:21:43.356578 ignition[1063]: INFO : files: files passed
Nov 1 00:21:43.356578 ignition[1063]: INFO : Ignition finished successfully
Nov 1 00:21:43.363854 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 1 00:21:43.367955 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 1 00:21:43.387800 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 1 00:21:43.394097 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 1 00:21:43.394281 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 1 00:21:43.409490 initrd-setup-root-after-ignition[1094]: grep: /sysroot/oem/oem-release: No such file or directory
Nov 1 00:21:43.414930 initrd-setup-root-after-ignition[1096]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 1 00:21:43.414930 initrd-setup-root-after-ignition[1096]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 1 00:21:43.420976 initrd-setup-root-after-ignition[1100]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 1 00:21:43.419262 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 1 00:21:43.421993 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 1 00:21:43.430085 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 1 00:21:43.505726 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 1 00:21:43.505938 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 1 00:21:43.507295 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 1 00:21:43.507860 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 1 00:21:43.509103 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 1 00:21:43.510621 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 1 00:21:43.544058 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 1 00:21:43.546543 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 1 00:21:43.573481 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 1 00:21:43.573704 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 1 00:21:43.575132 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 1 00:21:43.584415 systemd[1]: Stopped target timers.target - Timer Units.
Nov 1 00:21:43.585588 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 1 00:21:43.585806 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 1 00:21:43.591822 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 1 00:21:43.595797 systemd[1]: Stopped target basic.target - Basic System.
Nov 1 00:21:43.599259 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 1 00:21:43.600816 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 1 00:21:43.606792 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 1 00:21:43.608087 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Nov 1 00:21:43.614675 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 1 00:21:43.616111 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 1 00:21:43.622308 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 1 00:21:43.623852 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 1 00:21:43.624778 systemd[1]: Stopped target swap.target - Swaps.
Nov 1 00:21:43.631838 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 1 00:21:43.632151 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 1 00:21:43.635092 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 1 00:21:43.639215 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 1 00:21:43.640038 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 1 00:21:43.640288 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 1 00:21:43.640998 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 1 00:21:43.641283 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 1 00:21:43.653265 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 1 00:21:43.653527 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 1 00:21:43.654974 systemd[1]: Stopped target paths.target - Path Units.
Nov 1 00:21:43.659651 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 1 00:21:43.665215 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 1 00:21:43.666775 systemd[1]: Stopped target slices.target - Slice Units.
Nov 1 00:21:43.667602 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 1 00:21:43.668215 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 1 00:21:43.668361 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 1 00:21:43.668798 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 1 00:21:43.668906 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 1 00:21:43.669736 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 1 00:21:43.669896 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 1 00:21:43.684047 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 1 00:21:43.685207 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 1 00:21:43.692304 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 1 00:21:43.696698 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 1 00:21:43.699928 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 1 00:21:43.700159 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 1 00:21:43.701041 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 1 00:21:43.701203 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 1 00:21:43.707865 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 1 00:21:43.708015 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 1 00:21:43.730572 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 1 00:21:43.730735 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 1 00:21:43.746354 ignition[1120]: INFO : Ignition 2.22.0
Nov 1 00:21:43.746354 ignition[1120]: INFO : Stage: umount
Nov 1 00:21:43.752195 ignition[1120]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 1 00:21:43.752195 ignition[1120]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 1 00:21:43.752195 ignition[1120]: INFO : umount: umount passed
Nov 1 00:21:43.752195 ignition[1120]: INFO : Ignition finished successfully
Nov 1 00:21:43.758940 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 1 00:21:43.759177 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 1 00:21:43.761350 systemd[1]: Stopped target network.target - Network.
Nov 1 00:21:43.762140 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 1 00:21:43.762214 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 1 00:21:43.762706 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 1 00:21:43.762768 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 1 00:21:43.770150 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 1 00:21:43.770272 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 1 00:21:43.771752 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 1 00:21:43.771808 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 1 00:21:43.772678 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 1 00:21:43.795248 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 1 00:21:43.796947 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 1 00:21:43.812587 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 1 00:21:43.812806 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 1 00:21:43.819589 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Nov 1 00:21:43.820907 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 1 00:21:43.820972 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 1 00:21:43.824030 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 1 00:21:43.829400 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 1 00:21:43.829511 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 1 00:21:43.830088 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 1 00:21:43.833134 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 1 00:21:43.846673 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 1 00:21:43.856580 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 1 00:21:43.856818 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 1 00:21:43.859900 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 1 00:21:43.859977 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 1 00:21:43.863839 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 1 00:21:43.863949 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 1 00:21:43.865945 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 1 00:21:43.866013 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 1 00:21:43.867349 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 1 00:21:43.867446 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 1 00:21:43.880827 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 1 00:21:43.880973 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 1 00:21:43.884862 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 1 00:21:43.889038 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Nov 1 00:21:43.889190 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Nov 1 00:21:43.889984 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 1 00:21:43.890041 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 1 00:21:43.897818 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 1 00:21:43.897919 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 1 00:21:43.898996 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 1 00:21:43.899097 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 1 00:21:43.899835 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Nov 1 00:21:43.899908 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 1 00:21:43.900694 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 1 00:21:43.900751 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 1 00:21:43.901666 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 1 00:21:43.901720 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 1 00:21:43.903495 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 1 00:21:43.925508 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 1 00:21:43.928669 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 1 00:21:43.928799 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 1 00:21:43.944217 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 1 00:21:43.944380 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 1 00:21:43.958584 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 1 00:21:43.958757 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 1 00:21:43.960329 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 1 00:21:43.969613 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 1 00:21:44.004701 systemd[1]: Switching root.
Nov 1 00:21:44.048708 systemd-journald[317]: Journal stopped
Nov 1 00:21:46.125970 systemd-journald[317]: Received SIGTERM from PID 1 (systemd).
Nov 1 00:21:46.126049 kernel: SELinux: policy capability network_peer_controls=1
Nov 1 00:21:46.126088 kernel: SELinux: policy capability open_perms=1
Nov 1 00:21:46.126106 kernel: SELinux: policy capability extended_socket_class=1
Nov 1 00:21:46.126124 kernel: SELinux: policy capability always_check_network=0
Nov 1 00:21:46.126144 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 1 00:21:46.126166 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 1 00:21:46.126209 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 1 00:21:46.126241 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 1 00:21:46.126268 kernel: SELinux: policy capability userspace_initial_context=0
Nov 1 00:21:46.126298 systemd[1]: Successfully loaded SELinux policy in 76.060ms.
Nov 1 00:21:46.126349 kernel: audit: type=1403 audit(1761956504.800:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 1 00:21:46.126371 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.468ms.
Nov 1 00:21:46.126392 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 1 00:21:46.126419 systemd[1]: Detected virtualization kvm.
Nov 1 00:21:46.126458 systemd[1]: Detected architecture x86-64.
Nov 1 00:21:46.126516 systemd[1]: Detected first boot.
Nov 1 00:21:46.126571 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Nov 1 00:21:46.126611 zram_generator::config[1167]: No configuration found.
Nov 1 00:21:46.126657 kernel: Guest personality initialized and is inactive
Nov 1 00:21:46.126703 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Nov 1 00:21:46.126733 kernel: Initialized host personality
Nov 1 00:21:46.126776 kernel: NET: Registered PF_VSOCK protocol family
Nov 1 00:21:46.126817 systemd[1]: Populated /etc with preset unit settings.
Nov 1 00:21:46.126850 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 1 00:21:46.126888 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 1 00:21:46.126917 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 1 00:21:46.126964 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 1 00:21:46.127004 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 1 00:21:46.127039 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 1 00:21:46.127102 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 1 00:21:46.127135 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 1 00:21:46.127166 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 1 00:21:46.127209 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 1 00:21:46.127240 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 1 00:21:46.127269 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 1 00:21:46.127320 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 1 00:21:46.127362 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 1 00:21:46.127394 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 1 00:21:46.127424 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 1 00:21:46.127455 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 1 00:21:46.127484 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 1 00:21:46.127516 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 1 00:21:46.127551 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 1 00:21:46.127581 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 1 00:21:46.127650 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 1 00:21:46.127683 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 1 00:21:46.127713 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 1 00:21:46.127743 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 1 00:21:46.127773 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 1 00:21:46.127820 systemd[1]: Reached target slices.target - Slice Units.
Nov 1 00:21:46.127854 systemd[1]: Reached target swap.target - Swaps.
Nov 1 00:21:46.127884 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 1 00:21:46.127915 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 1 00:21:46.127945 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Nov 1 00:21:46.127979 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 1 00:21:46.128010 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 1 00:21:46.128046 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 1 00:21:46.128097 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 1 00:21:46.128125 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 1 00:21:46.128144 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 1 00:21:46.128162 systemd[1]: Mounting media.mount - External Media Directory...
Nov 1 00:21:46.128192 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:21:46.128222 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 1 00:21:46.128255 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 1 00:21:46.128286 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 1 00:21:46.128322 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 1 00:21:46.128352 systemd[1]: Reached target machines.target - Containers.
Nov 1 00:21:46.128383 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 1 00:21:46.128412 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 1 00:21:46.128442 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 1 00:21:46.128479 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 1 00:21:46.128511 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 1 00:21:46.128542 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 1 00:21:46.128560 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 1 00:21:46.128575 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 1 00:21:46.128590 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 1 00:21:46.128607 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 1 00:21:46.128627 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 1 00:21:46.128642 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 1 00:21:46.128658 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 1 00:21:46.128673 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 1 00:21:46.128689 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 1 00:21:46.128706 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 1 00:21:46.128724 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 1 00:21:46.128740 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 1 00:21:46.128755 kernel: ACPI: bus type drm_connector registered
Nov 1 00:21:46.128772 kernel: fuse: init (API version 7.41)
Nov 1 00:21:46.128787 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 1 00:21:46.128802 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Nov 1 00:21:46.128822 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 1 00:21:46.128841 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 1 00:21:46.128856 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 1 00:21:46.128869 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 1 00:21:46.128881 systemd[1]: Mounted media.mount - External Media Directory.
Nov 1 00:21:46.128919 systemd-journald[1245]: Collecting audit messages is disabled.
Nov 1 00:21:46.128944 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 1 00:21:46.128957 systemd-journald[1245]: Journal started
Nov 1 00:21:46.128979 systemd-journald[1245]: Runtime Journal (/run/log/journal/de52f4171d2c4ad498a9b27a541f1dd0) is 6M, max 48.1M, 42.1M free.
Nov 1 00:21:45.712607 systemd[1]: Queued start job for default target multi-user.target.
Nov 1 00:21:45.738779 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Nov 1 00:21:45.739684 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 1 00:21:46.132371 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 1 00:21:46.135750 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 1 00:21:46.138091 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 1 00:21:46.140608 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 1 00:21:46.143304 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 1 00:21:46.146210 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 1 00:21:46.146635 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 1 00:21:46.149448 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 1 00:21:46.149722 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 1 00:21:46.152398 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 1 00:21:46.152752 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 1 00:21:46.155130 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 1 00:21:46.155396 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 1 00:21:46.158325 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 1 00:21:46.158650 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 1 00:21:46.161818 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 1 00:21:46.162056 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 1 00:21:46.164612 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 1 00:21:46.568359 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 1 00:21:46.574291 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 1 00:21:46.580892 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 1 00:21:46.602904 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Nov 1 00:21:46.636301 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 1 00:21:46.639949 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Nov 1 00:21:46.649871 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 1 00:21:46.666268 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 1 00:21:46.672860 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 1 00:21:46.672947 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 1 00:21:46.677479 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 1 00:21:46.685600 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 1 00:21:46.701586 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 1 00:21:46.706055 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 1 00:21:46.706600 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:21:46.711497 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 1 00:21:46.712045 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 1 00:21:46.717547 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 1 00:21:46.733741 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 1 00:21:46.737690 systemd-journald[1245]: Time spent on flushing to /var/log/journal/de52f4171d2c4ad498a9b27a541f1dd0 is 20.969ms for 1056 entries. Nov 1 00:21:46.737690 systemd-journald[1245]: System Journal (/var/log/journal/de52f4171d2c4ad498a9b27a541f1dd0) is 8M, max 163.5M, 155.5M free. Nov 1 00:21:46.806309 systemd-journald[1245]: Received client request to flush runtime journal. 
Nov 1 00:21:46.806383 kernel: loop1: detected capacity change from 0 to 110984 Nov 1 00:21:46.751442 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 1 00:21:46.756713 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 1 00:21:46.760450 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 1 00:21:46.805918 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 1 00:21:46.819213 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 1 00:21:46.823011 systemd-tmpfiles[1288]: ACLs are not supported, ignoring. Nov 1 00:21:46.823037 systemd-tmpfiles[1288]: ACLs are not supported, ignoring. Nov 1 00:21:46.823272 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 1 00:21:46.830737 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 1 00:21:46.837235 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 1 00:21:46.841409 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 1 00:21:46.851393 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 1 00:21:46.868111 kernel: loop2: detected capacity change from 0 to 128048 Nov 1 00:21:46.904316 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 1 00:21:46.927635 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 1 00:21:46.935878 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 1 00:21:46.941477 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 1 00:21:46.955213 kernel: loop3: detected capacity change from 0 to 224512 Nov 1 00:21:46.969621 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Nov 1 00:21:46.986116 systemd-tmpfiles[1307]: ACLs are not supported, ignoring. Nov 1 00:21:46.986614 systemd-tmpfiles[1307]: ACLs are not supported, ignoring. Nov 1 00:21:47.389223 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 1 00:21:47.816134 kernel: loop4: detected capacity change from 0 to 110984 Nov 1 00:21:47.817790 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 1 00:21:47.836103 kernel: loop5: detected capacity change from 0 to 128048 Nov 1 00:21:47.848100 kernel: loop6: detected capacity change from 0 to 224512 Nov 1 00:21:47.859429 (sd-merge)[1312]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Nov 1 00:21:47.864765 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 1 00:21:47.867177 (sd-merge)[1312]: Merged extensions into '/usr'. Nov 1 00:21:47.876783 systemd[1]: Reload requested from client PID 1287 ('systemd-sysext') (unit systemd-sysext.service)... Nov 1 00:21:47.876804 systemd[1]: Reloading... Nov 1 00:21:47.957110 zram_generator::config[1346]: No configuration found. Nov 1 00:21:48.057041 systemd-resolved[1306]: Positive Trust Anchors: Nov 1 00:21:48.057099 systemd-resolved[1306]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 1 00:21:48.057104 systemd-resolved[1306]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 1 00:21:48.057144 systemd-resolved[1306]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 1 00:21:48.064155 systemd-resolved[1306]: Defaulting to hostname 'linux'. Nov 1 00:21:48.227309 systemd[1]: Reloading finished in 349 ms. Nov 1 00:21:48.265465 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 1 00:21:48.268107 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 1 00:21:48.274944 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 1 00:21:48.298058 systemd[1]: Starting ensure-sysext.service... Nov 1 00:21:48.302034 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 1 00:21:48.325634 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 1 00:21:48.329575 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 1 00:21:48.329630 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 1 00:21:48.330087 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 1 00:21:48.330515 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
Nov 1 00:21:48.331988 systemd-tmpfiles[1383]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 1 00:21:48.332459 systemd-tmpfiles[1383]: ACLs are not supported, ignoring. Nov 1 00:21:48.332571 systemd-tmpfiles[1383]: ACLs are not supported, ignoring. Nov 1 00:21:48.334794 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 1 00:21:48.337586 systemd[1]: Reload requested from client PID 1382 ('systemctl') (unit ensure-sysext.service)... Nov 1 00:21:48.337608 systemd[1]: Reloading... Nov 1 00:21:48.340926 systemd-tmpfiles[1383]: Detected autofs mount point /boot during canonicalization of boot. Nov 1 00:21:48.340946 systemd-tmpfiles[1383]: Skipping /boot Nov 1 00:21:48.357410 systemd-tmpfiles[1383]: Detected autofs mount point /boot during canonicalization of boot. Nov 1 00:21:48.357445 systemd-tmpfiles[1383]: Skipping /boot Nov 1 00:21:48.386622 systemd-udevd[1386]: Using default interface naming scheme 'v257'. Nov 1 00:21:48.426304 zram_generator::config[1414]: No configuration found. Nov 1 00:21:48.656321 kernel: mousedev: PS/2 mouse device common for all mice Nov 1 00:21:48.670136 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Nov 1 00:21:48.678102 kernel: ACPI: button: Power Button [PWRF] Nov 1 00:21:48.691628 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 1 00:21:48.691834 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 1 00:21:48.694624 systemd[1]: Reloading finished in 356 ms. Nov 1 00:21:48.705373 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 1 00:21:48.709993 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Nov 1 00:21:48.766716 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Nov 1 00:21:48.769718 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Nov 1 00:21:48.770174 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Nov 1 00:21:48.782428 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:21:48.786317 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 1 00:21:48.793093 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 1 00:21:48.796889 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 1 00:21:48.803908 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 1 00:21:48.824965 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 1 00:21:48.831434 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 1 00:21:48.884582 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 1 00:21:48.898016 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 1 00:21:48.903791 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 1 00:21:48.915335 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 1 00:21:48.923619 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 1 00:21:48.936719 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 1 00:21:48.948524 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Nov 1 00:21:48.953093 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 1 00:21:48.955663 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 1 00:21:48.959943 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 1 00:21:48.964414 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 1 00:21:48.968695 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 1 00:21:48.968951 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 1 00:21:48.971999 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 1 00:21:48.980920 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 1 00:21:48.999947 systemd[1]: Finished ensure-sysext.service. Nov 1 00:21:49.005360 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 1 00:21:49.005731 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 1 00:21:49.017035 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 1 00:21:49.017419 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 1 00:21:49.031174 augenrules[1532]: No rules Nov 1 00:21:49.029530 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 1 00:21:49.034670 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 00:21:49.037732 systemd[1]: audit-rules.service: Deactivated successfully. Nov 1 00:21:49.041170 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 1 00:21:49.074472 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Nov 1 00:21:49.407985 kernel: kvm_amd: TSC scaling supported Nov 1 00:21:49.408053 kernel: kvm_amd: Nested Virtualization enabled Nov 1 00:21:49.408110 kernel: kvm_amd: Nested Paging enabled Nov 1 00:21:49.408166 kernel: kvm_amd: LBR virtualization supported Nov 1 00:21:49.408190 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Nov 1 00:21:49.406342 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 1 00:21:49.412358 kernel: kvm_amd: Virtual GIF supported Nov 1 00:21:49.418754 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 1 00:21:49.422041 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 1 00:21:49.424967 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 1 00:21:49.425597 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:21:49.440457 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 1 00:21:49.442502 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 1 00:21:49.485112 kernel: EDAC MC: Ver: 3.0.0 Nov 1 00:21:49.505129 systemd-networkd[1518]: lo: Link UP Nov 1 00:21:49.505140 systemd-networkd[1518]: lo: Gained carrier Nov 1 00:21:49.508334 systemd-networkd[1518]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 1 00:21:49.508474 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 1 00:21:49.511794 systemd-networkd[1518]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 1 00:21:49.512679 systemd[1]: Reached target network.target - Network. 
Nov 1 00:21:49.517303 systemd-networkd[1518]: eth0: Link UP Nov 1 00:21:49.517521 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 1 00:21:49.520911 systemd-networkd[1518]: eth0: Gained carrier Nov 1 00:21:49.523660 systemd-networkd[1518]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 1 00:21:49.524707 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 1 00:21:49.540199 systemd-networkd[1518]: eth0: DHCPv4 address 10.0.0.116/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 1 00:21:50.153767 systemd-timesyncd[1531]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 1 00:21:50.154863 systemd-timesyncd[1531]: Initial clock synchronization to Sat 2025-11-01 00:21:50.152805 UTC. Nov 1 00:21:50.157516 systemd-resolved[1306]: Clock change detected. Flushing caches. Nov 1 00:21:50.160579 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 1 00:21:50.162602 systemd[1]: Reached target time-set.target - System Time Set. Nov 1 00:21:50.337398 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 1 00:21:50.358322 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 1 00:21:50.714814 ldconfig[1495]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 1 00:21:50.724813 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 1 00:21:50.729774 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 1 00:21:50.778875 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 1 00:21:50.781783 systemd[1]: Reached target sysinit.target - System Initialization. 
Nov 1 00:21:50.784108 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 1 00:21:50.786386 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 1 00:21:50.788591 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Nov 1 00:21:50.790908 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 1 00:21:50.793140 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 1 00:21:50.795450 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 1 00:21:50.797705 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 1 00:21:50.797749 systemd[1]: Reached target paths.target - Path Units. Nov 1 00:21:50.799336 systemd[1]: Reached target timers.target - Timer Units. Nov 1 00:21:50.803957 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 1 00:21:50.809336 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 1 00:21:50.814892 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 1 00:21:50.817672 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 1 00:21:50.820310 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 1 00:21:50.829725 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 1 00:21:50.832145 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 1 00:21:50.835211 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 1 00:21:50.838148 systemd[1]: Reached target sockets.target - Socket Units. Nov 1 00:21:50.839902 systemd[1]: Reached target basic.target - Basic System. 
Nov 1 00:21:50.841585 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 1 00:21:50.841626 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 1 00:21:50.843551 systemd[1]: Starting containerd.service - containerd container runtime... Nov 1 00:21:50.847290 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 1 00:21:50.850785 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 1 00:21:50.859784 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 1 00:21:50.863811 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 1 00:21:50.866427 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 1 00:21:50.868777 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Nov 1 00:21:50.873637 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 1 00:21:50.877999 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 1 00:21:50.883106 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 1 00:21:50.887377 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 1 00:21:50.901189 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 1 00:21:50.903153 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 1 00:21:50.903788 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 1 00:21:50.906174 systemd[1]: Starting update-engine.service - Update Engine... 
Nov 1 00:21:50.908907 jq[1568]: false Nov 1 00:21:50.909185 google_oslogin_nss_cache[1570]: oslogin_cache_refresh[1570]: Refreshing passwd entry cache Nov 1 00:21:50.909033 oslogin_cache_refresh[1570]: Refreshing passwd entry cache Nov 1 00:21:50.912121 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 1 00:21:50.919333 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 1 00:21:50.923183 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 1 00:21:50.923542 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 1 00:21:50.927037 extend-filesystems[1569]: Found /dev/vda6 Nov 1 00:21:50.927018 oslogin_cache_refresh[1570]: Failure getting users, quitting Nov 1 00:21:50.928550 google_oslogin_nss_cache[1570]: oslogin_cache_refresh[1570]: Failure getting users, quitting Nov 1 00:21:50.928550 google_oslogin_nss_cache[1570]: oslogin_cache_refresh[1570]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 1 00:21:50.928550 google_oslogin_nss_cache[1570]: oslogin_cache_refresh[1570]: Refreshing group entry cache Nov 1 00:21:50.927064 oslogin_cache_refresh[1570]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 1 00:21:50.927166 oslogin_cache_refresh[1570]: Refreshing group entry cache Nov 1 00:21:50.929600 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 1 00:21:50.929862 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Nov 1 00:21:50.939108 jq[1577]: true Nov 1 00:21:50.938304 oslogin_cache_refresh[1570]: Failure getting groups, quitting Nov 1 00:21:50.939701 google_oslogin_nss_cache[1570]: oslogin_cache_refresh[1570]: Failure getting groups, quitting Nov 1 00:21:50.939701 google_oslogin_nss_cache[1570]: oslogin_cache_refresh[1570]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 1 00:21:50.938318 oslogin_cache_refresh[1570]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 1 00:21:50.943785 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Nov 1 00:21:50.945064 extend-filesystems[1569]: Found /dev/vda9 Nov 1 00:21:50.944114 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Nov 1 00:21:50.948027 systemd[1]: motdgen.service: Deactivated successfully. Nov 1 00:21:50.948311 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 1 00:21:50.971402 extend-filesystems[1569]: Checking size of /dev/vda9 Nov 1 00:21:50.977197 tar[1586]: linux-amd64/LICENSE Nov 1 00:21:50.977735 tar[1586]: linux-amd64/helm Nov 1 00:21:50.981874 (ntainerd)[1598]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 1 00:21:50.986096 jq[1597]: true Nov 1 00:21:51.000419 update_engine[1576]: I20251101 00:21:50.999751 1576 main.cc:92] Flatcar Update Engine starting Nov 1 00:21:51.013228 extend-filesystems[1569]: Resized partition /dev/vda9 Nov 1 00:21:51.018180 extend-filesystems[1617]: resize2fs 1.47.3 (8-Jul-2025) Nov 1 00:21:51.028881 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Nov 1 00:21:51.163353 dbus-daemon[1566]: [system] SELinux support is enabled Nov 1 00:21:51.163723 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Nov 1 00:21:51.168613 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 1 00:21:51.173513 update_engine[1576]: I20251101 00:21:51.166358 1576 update_check_scheduler.cc:74] Next update check in 8m32s Nov 1 00:21:51.168640 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 1 00:21:51.174263 systemd-logind[1575]: Watching system buttons on /dev/input/event2 (Power Button) Nov 1 00:21:51.174291 systemd-logind[1575]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 1 00:21:51.175299 systemd-logind[1575]: New seat seat0. Nov 1 00:21:51.177574 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 1 00:21:51.177612 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 1 00:21:51.182031 systemd[1]: Started systemd-logind.service - User Login Management. Nov 1 00:21:51.187636 systemd[1]: Started update-engine.service - Update Engine. Nov 1 00:21:51.230185 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Nov 1 00:21:51.188022 dbus-daemon[1566]: [system] Successfully activated service 'org.freedesktop.systemd1' Nov 1 00:21:51.194463 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 1 00:21:51.231405 extend-filesystems[1617]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 1 00:21:51.231405 extend-filesystems[1617]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 1 00:21:51.231405 extend-filesystems[1617]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. 
Nov 1 00:21:51.248352 extend-filesystems[1569]: Resized filesystem in /dev/vda9 Nov 1 00:21:51.234246 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 1 00:21:51.234680 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 1 00:21:51.253083 bash[1633]: Updated "/home/core/.ssh/authorized_keys" Nov 1 00:21:51.254219 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 1 00:21:51.259699 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 1 00:21:51.325201 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 1 00:21:51.345449 locksmithd[1634]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 1 00:21:51.424075 sshd_keygen[1592]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 1 00:21:51.477840 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 1 00:21:51.484058 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 1 00:21:51.556660 systemd-networkd[1518]: eth0: Gained IPv6LL Nov 1 00:21:51.560648 systemd[1]: Started sshd@0-10.0.0.116:22-10.0.0.1:45396.service - OpenSSH per-connection server daemon (10.0.0.1:45396). Nov 1 00:21:51.626521 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 1 00:21:51.640064 systemd[1]: Reached target network-online.target - Network is Online. Nov 1 00:21:51.644175 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 1 00:21:51.650196 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:21:51.661823 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 1 00:21:51.664986 systemd[1]: issuegen.service: Deactivated successfully. Nov 1 00:21:51.667749 systemd[1]: Finished issuegen.service - Generate /run/issue. 
Nov 1 00:21:51.696546 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 1 00:21:51.753124 containerd[1598]: time="2025-11-01T00:21:51Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 1 00:21:51.753124 containerd[1598]: time="2025-11-01T00:21:51.727301420Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Nov 1 00:21:51.768611 containerd[1598]: time="2025-11-01T00:21:51.768468623Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="23.485µs" Nov 1 00:21:51.768843 containerd[1598]: time="2025-11-01T00:21:51.768812999Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 1 00:21:51.769128 containerd[1598]: time="2025-11-01T00:21:51.769003186Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 1 00:21:51.770135 containerd[1598]: time="2025-11-01T00:21:51.770088551Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 1 00:21:51.770326 containerd[1598]: time="2025-11-01T00:21:51.770263069Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 1 00:21:51.770540 containerd[1598]: time="2025-11-01T00:21:51.770503700Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 1 00:21:51.770813 containerd[1598]: time="2025-11-01T00:21:51.770770661Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 1 00:21:51.770912 containerd[1598]: time="2025-11-01T00:21:51.770881969Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 1 00:21:51.771533 containerd[1598]: time="2025-11-01T00:21:51.771501311Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 1 00:21:51.771615 containerd[1598]: time="2025-11-01T00:21:51.771595828Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 1 00:21:51.771692 containerd[1598]: time="2025-11-01T00:21:51.771671660Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 1 00:21:51.771878 containerd[1598]: time="2025-11-01T00:21:51.771853121Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 1 00:21:51.772154 containerd[1598]: time="2025-11-01T00:21:51.772114881Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 1 00:21:51.772684 containerd[1598]: time="2025-11-01T00:21:51.772656928Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 1 00:21:51.772805 containerd[1598]: time="2025-11-01T00:21:51.772779899Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 1 00:21:51.772881 containerd[1598]: time="2025-11-01T00:21:51.772861973Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 1 00:21:51.773040 containerd[1598]: time="2025-11-01T00:21:51.773014529Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups 
type=io.containerd.monitor.task.v1 Nov 1 00:21:51.773492 containerd[1598]: time="2025-11-01T00:21:51.773463350Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 1 00:21:51.773660 containerd[1598]: time="2025-11-01T00:21:51.773636715Z" level=info msg="metadata content store policy set" policy=shared Nov 1 00:21:51.797370 containerd[1598]: time="2025-11-01T00:21:51.796343439Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 1 00:21:51.797370 containerd[1598]: time="2025-11-01T00:21:51.796489142Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 1 00:21:51.797370 containerd[1598]: time="2025-11-01T00:21:51.796560225Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 1 00:21:51.797370 containerd[1598]: time="2025-11-01T00:21:51.796674510Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 1 00:21:51.797370 containerd[1598]: time="2025-11-01T00:21:51.796746985Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 1 00:21:51.797946 containerd[1598]: time="2025-11-01T00:21:51.797609202Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 1 00:21:51.797946 containerd[1598]: time="2025-11-01T00:21:51.797703369Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 1 00:21:51.797946 containerd[1598]: time="2025-11-01T00:21:51.797732864Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 1 00:21:51.797946 containerd[1598]: time="2025-11-01T00:21:51.797806142Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 1 
00:21:51.797946 containerd[1598]: time="2025-11-01T00:21:51.797829656Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 1 00:21:51.797946 containerd[1598]: time="2025-11-01T00:21:51.797870973Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 1 00:21:51.797946 containerd[1598]: time="2025-11-01T00:21:51.797899116Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 1 00:21:51.798658 containerd[1598]: time="2025-11-01T00:21:51.798616411Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 1 00:21:51.798982 containerd[1598]: time="2025-11-01T00:21:51.798953944Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 1 00:21:51.799189 containerd[1598]: time="2025-11-01T00:21:51.799103555Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 1 00:21:51.799322 containerd[1598]: time="2025-11-01T00:21:51.799299272Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 1 00:21:51.799444 containerd[1598]: time="2025-11-01T00:21:51.799407936Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 1 00:21:51.799585 containerd[1598]: time="2025-11-01T00:21:51.799561303Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 1 00:21:51.799696 containerd[1598]: time="2025-11-01T00:21:51.799673965Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 1 00:21:51.799890 containerd[1598]: time="2025-11-01T00:21:51.799827342Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 1 00:21:51.800145 containerd[1598]: time="2025-11-01T00:21:51.800096547Z" level=info msg="loading 
plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 1 00:21:51.801575 containerd[1598]: time="2025-11-01T00:21:51.800292254Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 1 00:21:51.801575 containerd[1598]: time="2025-11-01T00:21:51.801450216Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 1 00:21:51.801970 containerd[1598]: time="2025-11-01T00:21:51.801770787Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 1 00:21:51.801970 containerd[1598]: time="2025-11-01T00:21:51.801809439Z" level=info msg="Start snapshots syncer" Nov 1 00:21:51.801992 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 1 00:21:51.802302 containerd[1598]: time="2025-11-01T00:21:51.802237482Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 1 00:21:51.803031 containerd[1598]: time="2025-11-01T00:21:51.802959616Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 1 00:21:51.804219 containerd[1598]: time="2025-11-01T00:21:51.803728909Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 1 00:21:51.804431 containerd[1598]: time="2025-11-01T00:21:51.804403144Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 1 00:21:51.804955 containerd[1598]: time="2025-11-01T00:21:51.804716491Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 1 00:21:51.804955 containerd[1598]: time="2025-11-01T00:21:51.804755494Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 1 00:21:51.804955 containerd[1598]: time="2025-11-01T00:21:51.804772546Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 1 00:21:51.804955 containerd[1598]: time="2025-11-01T00:21:51.804792624Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 1 00:21:51.804955 containerd[1598]: time="2025-11-01T00:21:51.804808985Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 1 00:21:51.804955 containerd[1598]: time="2025-11-01T00:21:51.804824263Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 1 00:21:51.804955 containerd[1598]: time="2025-11-01T00:21:51.804844070Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 1 00:21:51.804955 containerd[1598]: time="2025-11-01T00:21:51.804901248Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 1 00:21:51.805262 containerd[1598]: time="2025-11-01T00:21:51.805235404Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 1 00:21:51.805361 containerd[1598]: time="2025-11-01T00:21:51.805337706Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 1 00:21:51.805510 containerd[1598]: time="2025-11-01T00:21:51.805482838Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 1 00:21:51.805606 containerd[1598]: time="2025-11-01T00:21:51.805582145Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 1 00:21:51.805681 containerd[1598]: time="2025-11-01T00:21:51.805660512Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 1 00:21:51.805759 containerd[1598]: time="2025-11-01T00:21:51.805739079Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 1 00:21:51.805943 containerd[1598]: time="2025-11-01T00:21:51.805809511Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 1 00:21:51.805943 containerd[1598]: time="2025-11-01T00:21:51.805853273Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 1 00:21:51.805943 containerd[1598]: time="2025-11-01T00:21:51.805879182Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 1 00:21:51.806218 containerd[1598]: time="2025-11-01T00:21:51.806195074Z" level=info msg="runtime interface created" Nov 1 00:21:51.806292 containerd[1598]: time="2025-11-01T00:21:51.806274002Z" level=info msg="created NRI interface" Nov 1 00:21:51.806367 containerd[1598]: time="2025-11-01T00:21:51.806346698Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 1 00:21:51.806452 containerd[1598]: time="2025-11-01T00:21:51.806432680Z" level=info msg="Connect containerd service" Nov 1 00:21:51.806586 containerd[1598]: time="2025-11-01T00:21:51.806561641Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 1 00:21:51.808951 containerd[1598]: 
time="2025-11-01T00:21:51.808884678Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 1 00:21:51.809196 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 1 00:21:51.816855 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 1 00:21:51.831594 systemd[1]: Reached target getty.target - Login Prompts. Nov 1 00:21:51.834477 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 1 00:21:51.835280 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 1 00:21:51.838465 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 1 00:21:51.855661 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 1 00:21:51.892335 sshd[1657]: Accepted publickey for core from 10.0.0.1 port 45396 ssh2: RSA SHA256:ejpXjL08eXwq5E+RKrHGlM9AwE1NxRVT+vpv8k52wss Nov 1 00:21:51.894134 sshd-session[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:21:51.917450 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 1 00:21:51.929566 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 1 00:21:51.935462 systemd-logind[1575]: New session 1 of user core. Nov 1 00:21:51.974706 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 1 00:21:51.984776 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 1 00:21:52.221527 (systemd)[1698]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 1 00:21:52.225704 systemd-logind[1575]: New session c1 of user core. 
Nov 1 00:21:52.250097 containerd[1598]: time="2025-11-01T00:21:52.249898186Z" level=info msg="Start subscribing containerd event" Nov 1 00:21:52.250097 containerd[1598]: time="2025-11-01T00:21:52.250020976Z" level=info msg="Start recovering state" Nov 1 00:21:52.250270 containerd[1598]: time="2025-11-01T00:21:52.250188009Z" level=info msg="Start event monitor" Nov 1 00:21:52.250270 containerd[1598]: time="2025-11-01T00:21:52.250217825Z" level=info msg="Start cni network conf syncer for default" Nov 1 00:21:52.250270 containerd[1598]: time="2025-11-01T00:21:52.250232162Z" level=info msg="Start streaming server" Nov 1 00:21:52.250270 containerd[1598]: time="2025-11-01T00:21:52.250252771Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 1 00:21:52.250270 containerd[1598]: time="2025-11-01T00:21:52.250266416Z" level=info msg="runtime interface starting up..." Nov 1 00:21:52.250430 containerd[1598]: time="2025-11-01T00:21:52.250278759Z" level=info msg="starting plugins..." Nov 1 00:21:52.250430 containerd[1598]: time="2025-11-01T00:21:52.250301442Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 1 00:21:52.251128 containerd[1598]: time="2025-11-01T00:21:52.250943686Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 1 00:21:52.251128 containerd[1598]: time="2025-11-01T00:21:52.251036831Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 1 00:21:52.251438 systemd[1]: Started containerd.service - containerd container runtime. Nov 1 00:21:52.254643 containerd[1598]: time="2025-11-01T00:21:52.254569476Z" level=info msg="containerd successfully booted in 0.530249s" Nov 1 00:21:52.259957 tar[1586]: linux-amd64/README.md Nov 1 00:21:52.296689 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 1 00:21:52.463306 systemd[1698]: Queued start job for default target default.target. Nov 1 00:21:52.556468 systemd[1698]: Created slice app.slice - User Application Slice. 
Nov 1 00:21:52.556520 systemd[1698]: Reached target paths.target - Paths. Nov 1 00:21:52.556594 systemd[1698]: Reached target timers.target - Timers. Nov 1 00:21:52.559253 systemd[1698]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 1 00:21:52.578163 systemd[1698]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 1 00:21:52.578399 systemd[1698]: Reached target sockets.target - Sockets. Nov 1 00:21:52.578483 systemd[1698]: Reached target basic.target - Basic System. Nov 1 00:21:52.578553 systemd[1698]: Reached target default.target - Main User Target. Nov 1 00:21:52.578605 systemd[1698]: Startup finished in 337ms. Nov 1 00:21:52.579504 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 1 00:21:52.590177 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 1 00:21:52.669759 systemd[1]: Started sshd@1-10.0.0.116:22-10.0.0.1:45400.service - OpenSSH per-connection server daemon (10.0.0.1:45400). Nov 1 00:21:52.732616 sshd[1718]: Accepted publickey for core from 10.0.0.1 port 45400 ssh2: RSA SHA256:ejpXjL08eXwq5E+RKrHGlM9AwE1NxRVT+vpv8k52wss Nov 1 00:21:52.734732 sshd-session[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:21:52.740344 systemd-logind[1575]: New session 2 of user core. Nov 1 00:21:52.751120 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 1 00:21:52.818227 sshd[1721]: Connection closed by 10.0.0.1 port 45400 Nov 1 00:21:52.819020 sshd-session[1718]: pam_unix(sshd:session): session closed for user core Nov 1 00:21:52.830526 systemd[1]: sshd@1-10.0.0.116:22-10.0.0.1:45400.service: Deactivated successfully. Nov 1 00:21:52.833538 systemd[1]: session-2.scope: Deactivated successfully. Nov 1 00:21:52.834680 systemd-logind[1575]: Session 2 logged out. Waiting for processes to exit. Nov 1 00:21:52.839070 systemd[1]: Started sshd@2-10.0.0.116:22-10.0.0.1:45410.service - OpenSSH per-connection server daemon (10.0.0.1:45410). 
Nov 1 00:21:52.843091 systemd-logind[1575]: Removed session 2. Nov 1 00:21:52.929064 sshd[1727]: Accepted publickey for core from 10.0.0.1 port 45410 ssh2: RSA SHA256:ejpXjL08eXwq5E+RKrHGlM9AwE1NxRVT+vpv8k52wss Nov 1 00:21:52.931190 sshd-session[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:21:52.937646 systemd-logind[1575]: New session 3 of user core. Nov 1 00:21:52.947248 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 1 00:21:53.007981 sshd[1730]: Connection closed by 10.0.0.1 port 45410 Nov 1 00:21:53.008313 sshd-session[1727]: pam_unix(sshd:session): session closed for user core Nov 1 00:21:53.014393 systemd[1]: sshd@2-10.0.0.116:22-10.0.0.1:45410.service: Deactivated successfully. Nov 1 00:21:53.017327 systemd[1]: session-3.scope: Deactivated successfully. Nov 1 00:21:53.018476 systemd-logind[1575]: Session 3 logged out. Waiting for processes to exit. Nov 1 00:21:53.020502 systemd-logind[1575]: Removed session 3. Nov 1 00:21:53.510866 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:21:53.514045 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 1 00:21:53.516582 systemd[1]: Startup finished in 3.483s (kernel) + 6.546s (initrd) + 8.185s (userspace) = 18.214s. 
Nov 1 00:21:53.517339 (kubelet)[1740]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 00:21:54.285399 kubelet[1740]: E1101 00:21:54.285283 1740 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:21:54.289590 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:21:54.289815 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:21:54.290272 systemd[1]: kubelet.service: Consumed 2.291s CPU time, 266M memory peak. Nov 1 00:22:03.030338 systemd[1]: Started sshd@3-10.0.0.116:22-10.0.0.1:40296.service - OpenSSH per-connection server daemon (10.0.0.1:40296). Nov 1 00:22:03.109587 sshd[1753]: Accepted publickey for core from 10.0.0.1 port 40296 ssh2: RSA SHA256:ejpXjL08eXwq5E+RKrHGlM9AwE1NxRVT+vpv8k52wss Nov 1 00:22:03.112465 sshd-session[1753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:22:03.119711 systemd-logind[1575]: New session 4 of user core. Nov 1 00:22:03.129208 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 1 00:22:03.193471 sshd[1756]: Connection closed by 10.0.0.1 port 40296 Nov 1 00:22:03.194129 sshd-session[1753]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:03.221734 systemd[1]: sshd@3-10.0.0.116:22-10.0.0.1:40296.service: Deactivated successfully. Nov 1 00:22:03.225588 systemd[1]: session-4.scope: Deactivated successfully. Nov 1 00:22:03.227580 systemd-logind[1575]: Session 4 logged out. Waiting for processes to exit. Nov 1 00:22:03.234885 systemd[1]: Started sshd@4-10.0.0.116:22-10.0.0.1:40304.service - OpenSSH per-connection server daemon (10.0.0.1:40304). 
Nov 1 00:22:03.235740 systemd-logind[1575]: Removed session 4. Nov 1 00:22:03.304770 sshd[1762]: Accepted publickey for core from 10.0.0.1 port 40304 ssh2: RSA SHA256:ejpXjL08eXwq5E+RKrHGlM9AwE1NxRVT+vpv8k52wss Nov 1 00:22:03.307067 sshd-session[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:22:03.314049 systemd-logind[1575]: New session 5 of user core. Nov 1 00:22:03.328278 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 1 00:22:03.380719 sshd[1765]: Connection closed by 10.0.0.1 port 40304 Nov 1 00:22:03.381020 sshd-session[1762]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:03.398571 systemd[1]: sshd@4-10.0.0.116:22-10.0.0.1:40304.service: Deactivated successfully. Nov 1 00:22:03.400954 systemd[1]: session-5.scope: Deactivated successfully. Nov 1 00:22:03.401891 systemd-logind[1575]: Session 5 logged out. Waiting for processes to exit. Nov 1 00:22:03.405466 systemd[1]: Started sshd@5-10.0.0.116:22-10.0.0.1:40310.service - OpenSSH per-connection server daemon (10.0.0.1:40310). Nov 1 00:22:03.406502 systemd-logind[1575]: Removed session 5. Nov 1 00:22:03.474489 sshd[1771]: Accepted publickey for core from 10.0.0.1 port 40310 ssh2: RSA SHA256:ejpXjL08eXwq5E+RKrHGlM9AwE1NxRVT+vpv8k52wss Nov 1 00:22:03.475947 sshd-session[1771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:22:03.481467 systemd-logind[1575]: New session 6 of user core. Nov 1 00:22:03.496202 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 1 00:22:03.554877 sshd[1774]: Connection closed by 10.0.0.1 port 40310 Nov 1 00:22:03.555563 sshd-session[1771]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:03.566312 systemd[1]: sshd@5-10.0.0.116:22-10.0.0.1:40310.service: Deactivated successfully. Nov 1 00:22:03.568623 systemd[1]: session-6.scope: Deactivated successfully. Nov 1 00:22:03.569539 systemd-logind[1575]: Session 6 logged out. 
Waiting for processes to exit. Nov 1 00:22:03.573169 systemd[1]: Started sshd@6-10.0.0.116:22-10.0.0.1:40312.service - OpenSSH per-connection server daemon (10.0.0.1:40312). Nov 1 00:22:03.574090 systemd-logind[1575]: Removed session 6. Nov 1 00:22:03.632825 sshd[1780]: Accepted publickey for core from 10.0.0.1 port 40312 ssh2: RSA SHA256:ejpXjL08eXwq5E+RKrHGlM9AwE1NxRVT+vpv8k52wss Nov 1 00:22:03.634868 sshd-session[1780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:22:03.640397 systemd-logind[1575]: New session 7 of user core. Nov 1 00:22:03.651179 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 1 00:22:03.722003 sudo[1784]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 1 00:22:03.722566 sudo[1784]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:22:03.743828 sudo[1784]: pam_unix(sudo:session): session closed for user root Nov 1 00:22:03.747047 sshd[1783]: Connection closed by 10.0.0.1 port 40312 Nov 1 00:22:03.747677 sshd-session[1780]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:03.765223 systemd[1]: sshd@6-10.0.0.116:22-10.0.0.1:40312.service: Deactivated successfully. Nov 1 00:22:03.767920 systemd[1]: session-7.scope: Deactivated successfully. Nov 1 00:22:03.769898 systemd-logind[1575]: Session 7 logged out. Waiting for processes to exit. Nov 1 00:22:03.775509 systemd[1]: Started sshd@7-10.0.0.116:22-10.0.0.1:40322.service - OpenSSH per-connection server daemon (10.0.0.1:40322). Nov 1 00:22:03.776437 systemd-logind[1575]: Removed session 7. Nov 1 00:22:03.860553 sshd[1790]: Accepted publickey for core from 10.0.0.1 port 40322 ssh2: RSA SHA256:ejpXjL08eXwq5E+RKrHGlM9AwE1NxRVT+vpv8k52wss Nov 1 00:22:03.862620 sshd-session[1790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:22:03.869363 systemd-logind[1575]: New session 8 of user core. 
Nov 1 00:22:03.884435 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 1 00:22:03.950256 sudo[1795]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 1 00:22:03.950751 sudo[1795]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:22:03.960543 sudo[1795]: pam_unix(sudo:session): session closed for user root Nov 1 00:22:03.972080 sudo[1794]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 1 00:22:03.972482 sudo[1794]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:22:03.991501 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 1 00:22:04.058260 augenrules[1817]: No rules Nov 1 00:22:04.060310 systemd[1]: audit-rules.service: Deactivated successfully. Nov 1 00:22:04.060635 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 1 00:22:04.062145 sudo[1794]: pam_unix(sudo:session): session closed for user root Nov 1 00:22:04.064503 sshd[1793]: Connection closed by 10.0.0.1 port 40322 Nov 1 00:22:04.064866 sshd-session[1790]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:04.081412 systemd[1]: sshd@7-10.0.0.116:22-10.0.0.1:40322.service: Deactivated successfully. Nov 1 00:22:04.084469 systemd[1]: session-8.scope: Deactivated successfully. Nov 1 00:22:04.086485 systemd-logind[1575]: Session 8 logged out. Waiting for processes to exit. Nov 1 00:22:04.090101 systemd[1]: Started sshd@8-10.0.0.116:22-10.0.0.1:40326.service - OpenSSH per-connection server daemon (10.0.0.1:40326). Nov 1 00:22:04.090832 systemd-logind[1575]: Removed session 8. 
Nov 1 00:22:04.161901 sshd[1826]: Accepted publickey for core from 10.0.0.1 port 40326 ssh2: RSA SHA256:ejpXjL08eXwq5E+RKrHGlM9AwE1NxRVT+vpv8k52wss Nov 1 00:22:04.164391 sshd-session[1826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:22:04.172190 systemd-logind[1575]: New session 9 of user core. Nov 1 00:22:04.188213 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 1 00:22:04.249175 sudo[1830]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 1 00:22:04.249552 sudo[1830]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 1 00:22:04.540685 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 1 00:22:04.543123 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:22:04.767625 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 1 00:22:04.789637 (dockerd)[1854]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 1 00:22:05.167722 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:22:05.183510 (kubelet)[1860]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 00:22:05.250432 kubelet[1860]: E1101 00:22:05.250335 1860 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:22:05.257657 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:22:05.257989 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Nov 1 00:22:05.258542 systemd[1]: kubelet.service: Consumed 380ms CPU time, 111.6M memory peak. Nov 1 00:22:05.419201 dockerd[1854]: time="2025-11-01T00:22:05.418913590Z" level=info msg="Starting up" Nov 1 00:22:05.420335 dockerd[1854]: time="2025-11-01T00:22:05.420273521Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 1 00:22:05.441116 dockerd[1854]: time="2025-11-01T00:22:05.440991334Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 1 00:22:06.476288 dockerd[1854]: time="2025-11-01T00:22:06.476193808Z" level=info msg="Loading containers: start." Nov 1 00:22:06.522961 kernel: Initializing XFRM netlink socket Nov 1 00:22:06.971173 systemd-networkd[1518]: docker0: Link UP Nov 1 00:22:06.978636 dockerd[1854]: time="2025-11-01T00:22:06.978530549Z" level=info msg="Loading containers: done." Nov 1 00:22:07.052742 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3992925059-merged.mount: Deactivated successfully. 
Nov 1 00:22:07.056790 dockerd[1854]: time="2025-11-01T00:22:07.056727857Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 1 00:22:07.056957 dockerd[1854]: time="2025-11-01T00:22:07.056824458Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 1 00:22:07.056957 dockerd[1854]: time="2025-11-01T00:22:07.056940015Z" level=info msg="Initializing buildkit" Nov 1 00:22:07.949911 dockerd[1854]: time="2025-11-01T00:22:07.949046162Z" level=info msg="Completed buildkit initialization" Nov 1 00:22:07.956522 dockerd[1854]: time="2025-11-01T00:22:07.956475986Z" level=info msg="Daemon has completed initialization" Nov 1 00:22:07.956652 dockerd[1854]: time="2025-11-01T00:22:07.956538172Z" level=info msg="API listen on /run/docker.sock" Nov 1 00:22:07.956855 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 1 00:22:09.286389 containerd[1598]: time="2025-11-01T00:22:09.286296213Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Nov 1 00:22:10.433452 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4013263478.mount: Deactivated successfully. 
Nov 1 00:22:11.609216 containerd[1598]: time="2025-11-01T00:22:11.609146336Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:11.610055 containerd[1598]: time="2025-11-01T00:22:11.609986672Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916" Nov 1 00:22:11.611466 containerd[1598]: time="2025-11-01T00:22:11.611425100Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:11.614531 containerd[1598]: time="2025-11-01T00:22:11.614490859Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:11.615757 containerd[1598]: time="2025-11-01T00:22:11.615731526Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 2.329351686s" Nov 1 00:22:11.615808 containerd[1598]: time="2025-11-01T00:22:11.615793041Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\"" Nov 1 00:22:11.616500 containerd[1598]: time="2025-11-01T00:22:11.616473187Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Nov 1 00:22:13.383039 containerd[1598]: time="2025-11-01T00:22:13.382963417Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:13.383881 containerd[1598]: time="2025-11-01T00:22:13.383815314Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027" Nov 1 00:22:13.385000 containerd[1598]: time="2025-11-01T00:22:13.384966172Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:13.387528 containerd[1598]: time="2025-11-01T00:22:13.387467654Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:13.388505 containerd[1598]: time="2025-11-01T00:22:13.388465976Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.771964827s" Nov 1 00:22:13.388505 containerd[1598]: time="2025-11-01T00:22:13.388498857Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\"" Nov 1 00:22:13.389124 containerd[1598]: time="2025-11-01T00:22:13.389057595Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Nov 1 00:22:15.021537 containerd[1598]: time="2025-11-01T00:22:15.021444173Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:15.043436 containerd[1598]: time="2025-11-01T00:22:15.043380642Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289" Nov 1 00:22:15.044849 containerd[1598]: time="2025-11-01T00:22:15.044814310Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:15.047644 containerd[1598]: time="2025-11-01T00:22:15.047581530Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:15.048676 containerd[1598]: time="2025-11-01T00:22:15.048640245Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.659535161s" Nov 1 00:22:15.048727 containerd[1598]: time="2025-11-01T00:22:15.048678948Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\"" Nov 1 00:22:15.049346 containerd[1598]: time="2025-11-01T00:22:15.049304200Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Nov 1 00:22:15.508363 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 1 00:22:15.510377 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:22:15.786709 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 1 00:22:15.810536 (kubelet)[2162]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 1 00:22:15.852121 kubelet[2162]: E1101 00:22:15.852046 2162 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 1 00:22:15.856907 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 1 00:22:15.857232 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 1 00:22:15.857741 systemd[1]: kubelet.service: Consumed 247ms CPU time, 110.6M memory peak. Nov 1 00:22:16.366824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2995754659.mount: Deactivated successfully. Nov 1 00:22:16.930685 containerd[1598]: time="2025-11-01T00:22:16.930598709Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:16.931507 containerd[1598]: time="2025-11-01T00:22:16.931464964Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206" Nov 1 00:22:16.932643 containerd[1598]: time="2025-11-01T00:22:16.932602106Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:16.934644 containerd[1598]: time="2025-11-01T00:22:16.934586277Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:16.935248 containerd[1598]: time="2025-11-01T00:22:16.935202833Z" level=info msg="Pulled 
image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 1.885865781s" Nov 1 00:22:16.935248 containerd[1598]: time="2025-11-01T00:22:16.935239733Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\"" Nov 1 00:22:16.935777 containerd[1598]: time="2025-11-01T00:22:16.935747074Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 1 00:22:17.552235 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2655122206.mount: Deactivated successfully. Nov 1 00:22:18.793172 containerd[1598]: time="2025-11-01T00:22:18.793079575Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:18.793839 containerd[1598]: time="2025-11-01T00:22:18.793792152Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Nov 1 00:22:18.795314 containerd[1598]: time="2025-11-01T00:22:18.795245137Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:18.798076 containerd[1598]: time="2025-11-01T00:22:18.797990335Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:18.799203 containerd[1598]: time="2025-11-01T00:22:18.799167913Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id 
\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.863393288s" Nov 1 00:22:18.799203 containerd[1598]: time="2025-11-01T00:22:18.799201967Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Nov 1 00:22:18.800488 containerd[1598]: time="2025-11-01T00:22:18.800195751Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 1 00:22:19.522267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount574182143.mount: Deactivated successfully. Nov 1 00:22:19.532949 containerd[1598]: time="2025-11-01T00:22:19.532870698Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:22:19.535088 containerd[1598]: time="2025-11-01T00:22:19.535022183Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 1 00:22:19.536185 containerd[1598]: time="2025-11-01T00:22:19.536133898Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:22:19.539446 containerd[1598]: time="2025-11-01T00:22:19.539368073Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 1 00:22:19.540426 containerd[1598]: time="2025-11-01T00:22:19.540379119Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 740.146479ms" Nov 1 00:22:19.540426 containerd[1598]: time="2025-11-01T00:22:19.540419084Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Nov 1 00:22:19.541146 containerd[1598]: time="2025-11-01T00:22:19.541110010Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 1 00:22:20.714759 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1091704001.mount: Deactivated successfully. Nov 1 00:22:22.760974 containerd[1598]: time="2025-11-01T00:22:22.760829172Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:22.761903 containerd[1598]: time="2025-11-01T00:22:22.761829077Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056" Nov 1 00:22:22.763323 containerd[1598]: time="2025-11-01T00:22:22.763283024Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:22.766236 containerd[1598]: time="2025-11-01T00:22:22.766199383Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:22.767786 containerd[1598]: time="2025-11-01T00:22:22.767749731Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag 
\"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.226603684s" Nov 1 00:22:22.767786 containerd[1598]: time="2025-11-01T00:22:22.767783053Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Nov 1 00:22:25.111550 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:22:25.111731 systemd[1]: kubelet.service: Consumed 247ms CPU time, 110.6M memory peak. Nov 1 00:22:25.114061 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:22:25.144659 systemd[1]: Reload requested from client PID 2318 ('systemctl') (unit session-9.scope)... Nov 1 00:22:25.144691 systemd[1]: Reloading... Nov 1 00:22:25.262973 zram_generator::config[2365]: No configuration found. Nov 1 00:22:25.636053 systemd[1]: Reloading finished in 490 ms. Nov 1 00:22:25.717881 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 1 00:22:25.718021 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 1 00:22:25.718380 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:22:25.718438 systemd[1]: kubelet.service: Consumed 182ms CPU time, 98.4M memory peak. Nov 1 00:22:25.720459 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:22:26.002609 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:22:26.012492 (kubelet)[2410]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 1 00:22:26.074427 kubelet[2410]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:22:26.074427 kubelet[2410]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 00:22:26.074427 kubelet[2410]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:22:26.074979 kubelet[2410]: I1101 00:22:26.074508 2410 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:22:26.302433 kubelet[2410]: I1101 00:22:26.302238 2410 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 1 00:22:26.302433 kubelet[2410]: I1101 00:22:26.302405 2410 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:22:26.302785 kubelet[2410]: I1101 00:22:26.302750 2410 server.go:954] "Client rotation is on, will bootstrap in background" Nov 1 00:22:26.329088 kubelet[2410]: E1101 00:22:26.329015 2410 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.116:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:22:26.331093 kubelet[2410]: I1101 00:22:26.331041 2410 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:22:26.339519 kubelet[2410]: I1101 00:22:26.339470 2410 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 1 00:22:26.346129 kubelet[2410]: I1101 00:22:26.346068 2410 
server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 1 00:22:26.348150 kubelet[2410]: I1101 00:22:26.348089 2410 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:22:26.348356 kubelet[2410]: I1101 00:22:26.348130 2410 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 1 00:22:26.348463 kubelet[2410]: I1101 
00:22:26.348361 2410 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 00:22:26.348463 kubelet[2410]: I1101 00:22:26.348372 2410 container_manager_linux.go:304] "Creating device plugin manager" Nov 1 00:22:26.348570 kubelet[2410]: I1101 00:22:26.348548 2410 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:22:26.351658 kubelet[2410]: I1101 00:22:26.351622 2410 kubelet.go:446] "Attempting to sync node with API server" Nov 1 00:22:26.351658 kubelet[2410]: I1101 00:22:26.351654 2410 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:22:26.351744 kubelet[2410]: I1101 00:22:26.351695 2410 kubelet.go:352] "Adding apiserver pod source" Nov 1 00:22:26.351744 kubelet[2410]: I1101 00:22:26.351717 2410 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:22:26.370824 kubelet[2410]: I1101 00:22:26.370767 2410 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 1 00:22:26.371287 kubelet[2410]: W1101 00:22:26.371200 2410 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.116:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Nov 1 00:22:26.371348 kubelet[2410]: I1101 00:22:26.371301 2410 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 1 00:22:26.371348 kubelet[2410]: E1101 00:22:26.371294 2410 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.116:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:22:26.371419 kubelet[2410]: W1101 00:22:26.371404 2410 probe.go:272] 
Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 1 00:22:26.371795 kubelet[2410]: W1101 00:22:26.371748 2410 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.116:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Nov 1 00:22:26.371795 kubelet[2410]: E1101 00:22:26.371786 2410 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.116:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:22:26.374217 kubelet[2410]: I1101 00:22:26.374161 2410 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 00:22:26.374269 kubelet[2410]: I1101 00:22:26.374235 2410 server.go:1287] "Started kubelet" Nov 1 00:22:26.374420 kubelet[2410]: I1101 00:22:26.374384 2410 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:22:26.376950 kubelet[2410]: I1101 00:22:26.375297 2410 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:22:26.376950 kubelet[2410]: I1101 00:22:26.375654 2410 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:22:26.376950 kubelet[2410]: I1101 00:22:26.375676 2410 server.go:479] "Adding debug handlers to kubelet server" Nov 1 00:22:26.379142 kubelet[2410]: E1101 00:22:26.379119 2410 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:22:26.379682 kubelet[2410]: I1101 00:22:26.379281 2410 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:22:26.380074 kubelet[2410]: I1101 00:22:26.379325 2410 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:22:26.380367 kubelet[2410]: I1101 00:22:26.380132 2410 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 00:22:26.380421 kubelet[2410]: I1101 00:22:26.380379 2410 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 00:22:26.380452 kubelet[2410]: I1101 00:22:26.380425 2410 reconciler.go:26] "Reconciler: start to sync state" Nov 1 00:22:26.380786 kubelet[2410]: W1101 00:22:26.380747 2410 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.116:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Nov 1 00:22:26.380847 kubelet[2410]: E1101 00:22:26.380790 2410 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.116:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:22:26.380847 kubelet[2410]: I1101 00:22:26.380824 2410 factory.go:221] Registration of the systemd container factory successfully Nov 1 00:22:26.380976 kubelet[2410]: I1101 00:22:26.380947 2410 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:22:26.381033 kubelet[2410]: E1101 00:22:26.380992 2410 kubelet_node_status.go:466] "Error 
getting the current node from lister" err="node \"localhost\" not found" Nov 1 00:22:26.381097 kubelet[2410]: E1101 00:22:26.381073 2410 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.116:6443: connect: connection refused" interval="200ms" Nov 1 00:22:26.382537 kubelet[2410]: I1101 00:22:26.382509 2410 factory.go:221] Registration of the containerd container factory successfully Nov 1 00:22:26.382986 kubelet[2410]: E1101 00:22:26.381220 2410 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.116:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.116:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1873ba24c84eb7fb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-01 00:22:26.374187003 +0000 UTC m=+0.354575217,LastTimestamp:2025-11-01 00:22:26.374187003 +0000 UTC m=+0.354575217,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 1 00:22:26.400881 kubelet[2410]: I1101 00:22:26.400829 2410 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:22:26.401062 kubelet[2410]: I1101 00:22:26.401047 2410 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:22:26.401149 kubelet[2410]: I1101 00:22:26.401137 2410 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:22:26.404997 kubelet[2410]: I1101 00:22:26.404874 2410 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Nov 1 00:22:26.406629 kubelet[2410]: I1101 00:22:26.406556 2410 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 1 00:22:26.406629 kubelet[2410]: I1101 00:22:26.406614 2410 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 1 00:22:26.406703 kubelet[2410]: I1101 00:22:26.406648 2410 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 1 00:22:26.406703 kubelet[2410]: I1101 00:22:26.406665 2410 kubelet.go:2382] "Starting kubelet main sync loop" Nov 1 00:22:26.406804 kubelet[2410]: E1101 00:22:26.406775 2410 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:22:26.481228 kubelet[2410]: E1101 00:22:26.481108 2410 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 00:22:26.507692 kubelet[2410]: E1101 00:22:26.507593 2410 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 1 00:22:26.582362 kubelet[2410]: E1101 00:22:26.582152 2410 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 00:22:26.582898 kubelet[2410]: E1101 00:22:26.582823 2410 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.116:6443: connect: connection refused" interval="400ms" Nov 1 00:22:26.682418 kubelet[2410]: E1101 00:22:26.682314 2410 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 00:22:26.708739 kubelet[2410]: E1101 00:22:26.708644 2410 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have 
completed yet" Nov 1 00:22:26.783376 kubelet[2410]: E1101 00:22:26.783254 2410 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 1 00:22:26.809679 kubelet[2410]: W1101 00:22:26.809571 2410 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.116:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Nov 1 00:22:26.809679 kubelet[2410]: E1101 00:22:26.809668 2410 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.116:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:22:26.811291 kubelet[2410]: I1101 00:22:26.811202 2410 policy_none.go:49] "None policy: Start" Nov 1 00:22:26.811291 kubelet[2410]: I1101 00:22:26.811265 2410 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 00:22:26.811291 kubelet[2410]: I1101 00:22:26.811287 2410 state_mem.go:35] "Initializing new in-memory state store" Nov 1 00:22:26.819923 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 1 00:22:26.837817 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 1 00:22:26.841229 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Nov 1 00:22:26.855959 kubelet[2410]: I1101 00:22:26.855895 2410 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 1 00:22:26.856996 kubelet[2410]: I1101 00:22:26.856215 2410 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:22:26.856996 kubelet[2410]: I1101 00:22:26.856247 2410 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:22:26.856996 kubelet[2410]: I1101 00:22:26.856534 2410 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:22:26.857538 kubelet[2410]: E1101 00:22:26.857515 2410 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 1 00:22:26.857586 kubelet[2410]: E1101 00:22:26.857563 2410 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 1 00:22:26.958183 kubelet[2410]: I1101 00:22:26.958118 2410 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:22:26.959377 kubelet[2410]: E1101 00:22:26.959334 2410 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.116:6443/api/v1/nodes\": dial tcp 10.0.0.116:6443: connect: connection refused" node="localhost" Nov 1 00:22:26.984605 kubelet[2410]: E1101 00:22:26.984545 2410 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.116:6443: connect: connection refused" interval="800ms" Nov 1 00:22:27.122924 systemd[1]: Created slice kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice - libcontainer container kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice. 
Nov 1 00:22:27.139620 kubelet[2410]: E1101 00:22:27.139562 2410 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:22:27.145175 systemd[1]: Created slice kubepods-burstable-podc4802903b9b7bb527f6ba7e74626de10.slice - libcontainer container kubepods-burstable-podc4802903b9b7bb527f6ba7e74626de10.slice. Nov 1 00:22:27.147635 kubelet[2410]: E1101 00:22:27.147578 2410 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:22:27.149402 systemd[1]: Created slice kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice - libcontainer container kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice. Nov 1 00:22:27.151715 kubelet[2410]: E1101 00:22:27.151650 2410 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:22:27.161290 kubelet[2410]: I1101 00:22:27.161248 2410 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:22:27.161733 kubelet[2410]: E1101 00:22:27.161691 2410 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.116:6443/api/v1/nodes\": dial tcp 10.0.0.116:6443: connect: connection refused" node="localhost" Nov 1 00:22:27.185975 kubelet[2410]: I1101 00:22:27.185866 2410 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:22:27.185975 kubelet[2410]: I1101 00:22:27.185921 2410 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:22:27.185975 kubelet[2410]: I1101 00:22:27.185993 2410 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c4802903b9b7bb527f6ba7e74626de10-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c4802903b9b7bb527f6ba7e74626de10\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:22:27.186264 kubelet[2410]: I1101 00:22:27.186014 2410 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c4802903b9b7bb527f6ba7e74626de10-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c4802903b9b7bb527f6ba7e74626de10\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:22:27.186264 kubelet[2410]: I1101 00:22:27.186037 2410 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:22:27.186264 kubelet[2410]: I1101 00:22:27.186138 2410 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:22:27.186264 kubelet[2410]: I1101 00:22:27.186258 2410 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Nov 1 00:22:27.186393 kubelet[2410]: I1101 00:22:27.186295 2410 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c4802903b9b7bb527f6ba7e74626de10-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c4802903b9b7bb527f6ba7e74626de10\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:22:27.186393 kubelet[2410]: I1101 00:22:27.186346 2410 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:22:27.440632 kubelet[2410]: E1101 00:22:27.440429 2410 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:27.441500 containerd[1598]: time="2025-11-01T00:22:27.441430639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,}" Nov 1 00:22:27.448746 kubelet[2410]: E1101 00:22:27.448647 2410 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:27.449259 containerd[1598]: time="2025-11-01T00:22:27.449186227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c4802903b9b7bb527f6ba7e74626de10,Namespace:kube-system,Attempt:0,}" Nov 1 00:22:27.452535 kubelet[2410]: 
E1101 00:22:27.452491 2410 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:27.453112 containerd[1598]: time="2025-11-01T00:22:27.453068540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,}" Nov 1 00:22:27.516612 containerd[1598]: time="2025-11-01T00:22:27.515613324Z" level=info msg="connecting to shim 33699f082501edf3f195c286d714d1f91736cc7f6676462082c335d5f089c1b6" address="unix:///run/containerd/s/1b4a6aeb34dbbd01b57acf99a669727fbe6773ea577d7958ebd763d212adbaa9" namespace=k8s.io protocol=ttrpc version=3 Nov 1 00:22:27.544508 containerd[1598]: time="2025-11-01T00:22:27.544445022Z" level=info msg="connecting to shim 52a1ffb83ce0ea2b2c68602de1ebfb8ae68b80a345f5ebffe4a8f00370a56ec9" address="unix:///run/containerd/s/f3d808260149aa808cd8c5e1dbda01eb84d3aeffdc4d419a9ac58b0a385eecd4" namespace=k8s.io protocol=ttrpc version=3 Nov 1 00:22:27.564279 kubelet[2410]: I1101 00:22:27.563720 2410 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:22:27.564982 kubelet[2410]: E1101 00:22:27.564631 2410 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.116:6443/api/v1/nodes\": dial tcp 10.0.0.116:6443: connect: connection refused" node="localhost" Nov 1 00:22:27.565122 containerd[1598]: time="2025-11-01T00:22:27.564803081Z" level=info msg="connecting to shim 7a4d26680ab5b5fda3fe758d70dc652144329becf9ae016d81042e1243ed16c3" address="unix:///run/containerd/s/1d261368b2daa7d0e9727fd6a1519010729d9840a3b2211d7ade89f6eb97a0e4" namespace=k8s.io protocol=ttrpc version=3 Nov 1 00:22:27.657672 systemd[1]: Started cri-containerd-33699f082501edf3f195c286d714d1f91736cc7f6676462082c335d5f089c1b6.scope - libcontainer container 
33699f082501edf3f195c286d714d1f91736cc7f6676462082c335d5f089c1b6. Nov 1 00:22:27.662852 kubelet[2410]: W1101 00:22:27.662779 2410 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.116:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Nov 1 00:22:27.662986 kubelet[2410]: E1101 00:22:27.662867 2410 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.116:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:22:27.667538 systemd[1]: Started cri-containerd-52a1ffb83ce0ea2b2c68602de1ebfb8ae68b80a345f5ebffe4a8f00370a56ec9.scope - libcontainer container 52a1ffb83ce0ea2b2c68602de1ebfb8ae68b80a345f5ebffe4a8f00370a56ec9. Nov 1 00:22:27.670748 systemd[1]: Started cri-containerd-7a4d26680ab5b5fda3fe758d70dc652144329becf9ae016d81042e1243ed16c3.scope - libcontainer container 7a4d26680ab5b5fda3fe758d70dc652144329becf9ae016d81042e1243ed16c3. 
Nov 1 00:22:27.786857 kubelet[2410]: E1101 00:22:27.786094 2410 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.116:6443: connect: connection refused" interval="1.6s" Nov 1 00:22:27.902719 kubelet[2410]: W1101 00:22:27.902557 2410 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.116:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Nov 1 00:22:27.902719 kubelet[2410]: E1101 00:22:27.902651 2410 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.116:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:22:27.912846 kubelet[2410]: W1101 00:22:27.912733 2410 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.116:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Nov 1 00:22:27.912968 kubelet[2410]: E1101 00:22:27.912883 2410 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.116:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:22:28.160819 kubelet[2410]: W1101 00:22:28.160735 2410 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://10.0.0.116:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Nov 1 00:22:28.160819 kubelet[2410]: E1101 00:22:28.160790 2410 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.116:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:22:28.367078 kubelet[2410]: I1101 00:22:28.367008 2410 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:22:28.367541 kubelet[2410]: E1101 00:22:28.367493 2410 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.116:6443/api/v1/nodes\": dial tcp 10.0.0.116:6443: connect: connection refused" node="localhost" Nov 1 00:22:28.440576 kubelet[2410]: E1101 00:22:28.440404 2410 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.116:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:22:29.329652 containerd[1598]: time="2025-11-01T00:22:29.329562726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,} returns sandbox id \"33699f082501edf3f195c286d714d1f91736cc7f6676462082c335d5f089c1b6\"" Nov 1 00:22:29.330833 kubelet[2410]: E1101 00:22:29.330796 2410 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:29.333485 containerd[1598]: 
time="2025-11-01T00:22:29.333438372Z" level=info msg="CreateContainer within sandbox \"33699f082501edf3f195c286d714d1f91736cc7f6676462082c335d5f089c1b6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 1 00:22:29.386883 kubelet[2410]: E1101 00:22:29.386720 2410 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.116:6443: connect: connection refused" interval="3.2s" Nov 1 00:22:29.443711 containerd[1598]: time="2025-11-01T00:22:29.443596988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c4802903b9b7bb527f6ba7e74626de10,Namespace:kube-system,Attempt:0,} returns sandbox id \"52a1ffb83ce0ea2b2c68602de1ebfb8ae68b80a345f5ebffe4a8f00370a56ec9\"" Nov 1 00:22:29.444520 kubelet[2410]: E1101 00:22:29.444473 2410 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:29.446998 containerd[1598]: time="2025-11-01T00:22:29.446954668Z" level=info msg="CreateContainer within sandbox \"52a1ffb83ce0ea2b2c68602de1ebfb8ae68b80a345f5ebffe4a8f00370a56ec9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 1 00:22:29.486087 containerd[1598]: time="2025-11-01T00:22:29.486033667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a4d26680ab5b5fda3fe758d70dc652144329becf9ae016d81042e1243ed16c3\"" Nov 1 00:22:29.487032 kubelet[2410]: E1101 00:22:29.486998 2410 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:29.489050 containerd[1598]: 
time="2025-11-01T00:22:29.489002516Z" level=info msg="CreateContainer within sandbox \"7a4d26680ab5b5fda3fe758d70dc652144329becf9ae016d81042e1243ed16c3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 1 00:22:29.703645 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1614383492.mount: Deactivated successfully. Nov 1 00:22:29.711132 containerd[1598]: time="2025-11-01T00:22:29.710401366Z" level=info msg="Container cb78ba2992ca6f88cfd638bb3b6750ac7481fd35bdbe9dc8d1b1f0feb774865d: CDI devices from CRI Config.CDIDevices: []" Nov 1 00:22:29.724191 containerd[1598]: time="2025-11-01T00:22:29.724119382Z" level=info msg="Container eacf69604211e416eccf599f97e2931030cc086e93f4f82dff2efc539a4bebad: CDI devices from CRI Config.CDIDevices: []" Nov 1 00:22:29.728834 containerd[1598]: time="2025-11-01T00:22:29.728749093Z" level=info msg="Container 068fee31d3b2692df134a96ca470ec7cf3f5d382635c485348ee8e648568d64d: CDI devices from CRI Config.CDIDevices: []" Nov 1 00:22:29.743523 kubelet[2410]: W1101 00:22:29.743449 2410 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.116:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused Nov 1 00:22:29.743523 kubelet[2410]: E1101 00:22:29.743523 2410 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.116:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError" Nov 1 00:22:29.749878 containerd[1598]: time="2025-11-01T00:22:29.749803590Z" level=info msg="CreateContainer within sandbox \"52a1ffb83ce0ea2b2c68602de1ebfb8ae68b80a345f5ebffe4a8f00370a56ec9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"eacf69604211e416eccf599f97e2931030cc086e93f4f82dff2efc539a4bebad\"" Nov 1 00:22:29.750044 containerd[1598]: time="2025-11-01T00:22:29.749959858Z" level=info msg="CreateContainer within sandbox \"33699f082501edf3f195c286d714d1f91736cc7f6676462082c335d5f089c1b6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cb78ba2992ca6f88cfd638bb3b6750ac7481fd35bdbe9dc8d1b1f0feb774865d\"" Nov 1 00:22:29.750821 containerd[1598]: time="2025-11-01T00:22:29.750788404Z" level=info msg="StartContainer for \"cb78ba2992ca6f88cfd638bb3b6750ac7481fd35bdbe9dc8d1b1f0feb774865d\"" Nov 1 00:22:29.750879 containerd[1598]: time="2025-11-01T00:22:29.750861063Z" level=info msg="StartContainer for \"eacf69604211e416eccf599f97e2931030cc086e93f4f82dff2efc539a4bebad\"" Nov 1 00:22:29.752605 containerd[1598]: time="2025-11-01T00:22:29.752569564Z" level=info msg="connecting to shim eacf69604211e416eccf599f97e2931030cc086e93f4f82dff2efc539a4bebad" address="unix:///run/containerd/s/f3d808260149aa808cd8c5e1dbda01eb84d3aeffdc4d419a9ac58b0a385eecd4" protocol=ttrpc version=3 Nov 1 00:22:29.752692 containerd[1598]: time="2025-11-01T00:22:29.752573361Z" level=info msg="connecting to shim cb78ba2992ca6f88cfd638bb3b6750ac7481fd35bdbe9dc8d1b1f0feb774865d" address="unix:///run/containerd/s/1b4a6aeb34dbbd01b57acf99a669727fbe6773ea577d7958ebd763d212adbaa9" protocol=ttrpc version=3 Nov 1 00:22:29.756114 containerd[1598]: time="2025-11-01T00:22:29.756081487Z" level=info msg="CreateContainer within sandbox \"7a4d26680ab5b5fda3fe758d70dc652144329becf9ae016d81042e1243ed16c3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"068fee31d3b2692df134a96ca470ec7cf3f5d382635c485348ee8e648568d64d\"" Nov 1 00:22:29.756980 containerd[1598]: time="2025-11-01T00:22:29.756915445Z" level=info msg="StartContainer for \"068fee31d3b2692df134a96ca470ec7cf3f5d382635c485348ee8e648568d64d\"" Nov 1 00:22:29.758488 containerd[1598]: time="2025-11-01T00:22:29.758382225Z" 
level=info msg="connecting to shim 068fee31d3b2692df134a96ca470ec7cf3f5d382635c485348ee8e648568d64d" address="unix:///run/containerd/s/1d261368b2daa7d0e9727fd6a1519010729d9840a3b2211d7ade89f6eb97a0e4" protocol=ttrpc version=3 Nov 1 00:22:29.798475 systemd[1]: Started cri-containerd-eacf69604211e416eccf599f97e2931030cc086e93f4f82dff2efc539a4bebad.scope - libcontainer container eacf69604211e416eccf599f97e2931030cc086e93f4f82dff2efc539a4bebad. Nov 1 00:22:29.813509 systemd[1]: Started cri-containerd-068fee31d3b2692df134a96ca470ec7cf3f5d382635c485348ee8e648568d64d.scope - libcontainer container 068fee31d3b2692df134a96ca470ec7cf3f5d382635c485348ee8e648568d64d. Nov 1 00:22:29.851303 systemd[1]: Started cri-containerd-cb78ba2992ca6f88cfd638bb3b6750ac7481fd35bdbe9dc8d1b1f0feb774865d.scope - libcontainer container cb78ba2992ca6f88cfd638bb3b6750ac7481fd35bdbe9dc8d1b1f0feb774865d. Nov 1 00:22:29.929682 containerd[1598]: time="2025-11-01T00:22:29.929617702Z" level=info msg="StartContainer for \"068fee31d3b2692df134a96ca470ec7cf3f5d382635c485348ee8e648568d64d\" returns successfully" Nov 1 00:22:29.932188 containerd[1598]: time="2025-11-01T00:22:29.932157746Z" level=info msg="StartContainer for \"eacf69604211e416eccf599f97e2931030cc086e93f4f82dff2efc539a4bebad\" returns successfully" Nov 1 00:22:29.950421 containerd[1598]: time="2025-11-01T00:22:29.950345818Z" level=info msg="StartContainer for \"cb78ba2992ca6f88cfd638bb3b6750ac7481fd35bdbe9dc8d1b1f0feb774865d\" returns successfully" Nov 1 00:22:29.971591 kubelet[2410]: I1101 00:22:29.971428 2410 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:22:29.972213 kubelet[2410]: E1101 00:22:29.971912 2410 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.116:6443/api/v1/nodes\": dial tcp 10.0.0.116:6443: connect: connection refused" node="localhost" Nov 1 00:22:30.427093 kubelet[2410]: E1101 00:22:30.427044 2410 kubelet.go:3190] "No need to create a mirror 
pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:22:30.427628 kubelet[2410]: E1101 00:22:30.427197 2410 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:30.434694 kubelet[2410]: E1101 00:22:30.434644 2410 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:22:30.434859 kubelet[2410]: E1101 00:22:30.434835 2410 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:30.443822 kubelet[2410]: E1101 00:22:30.443768 2410 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:22:30.444003 kubelet[2410]: E1101 00:22:30.443978 2410 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:31.440287 kubelet[2410]: E1101 00:22:31.440243 2410 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:22:31.440847 kubelet[2410]: E1101 00:22:31.440385 2410 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:31.440847 kubelet[2410]: E1101 00:22:31.440683 2410 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:22:31.440847 kubelet[2410]: E1101 00:22:31.440760 2410 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:32.355093 kubelet[2410]: I1101 00:22:32.355046 2410 apiserver.go:52] "Watching apiserver" Nov 1 00:22:32.381159 kubelet[2410]: I1101 00:22:32.381113 2410 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 00:22:32.441625 kubelet[2410]: E1101 00:22:32.441586 2410 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 1 00:22:32.442075 kubelet[2410]: E1101 00:22:32.441739 2410 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:32.451577 kubelet[2410]: E1101 00:22:32.451474 2410 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Nov 1 00:22:32.591408 kubelet[2410]: E1101 00:22:32.591359 2410 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 1 00:22:32.823402 kubelet[2410]: E1101 00:22:32.823343 2410 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Nov 1 00:22:33.197657 kubelet[2410]: I1101 00:22:33.197466 2410 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:22:33.363021 kubelet[2410]: I1101 00:22:33.362920 2410 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 1 00:22:33.382053 kubelet[2410]: I1101 00:22:33.381976 2410 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 00:22:33.648162 kubelet[2410]: I1101 00:22:33.648053 
2410 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 00:22:33.649342 kubelet[2410]: E1101 00:22:33.648516 2410 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:33.748820 kubelet[2410]: E1101 00:22:33.748708 2410 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:33.749098 kubelet[2410]: I1101 00:22:33.748826 2410 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:22:33.754084 kubelet[2410]: E1101 00:22:33.754043 2410 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:35.470371 systemd[1]: Reload requested from client PID 2684 ('systemctl') (unit session-9.scope)... Nov 1 00:22:35.470394 systemd[1]: Reloading... Nov 1 00:22:35.583018 zram_generator::config[2729]: No configuration found. Nov 1 00:22:35.978182 systemd[1]: Reloading finished in 507 ms. Nov 1 00:22:36.027203 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:22:36.051708 systemd[1]: kubelet.service: Deactivated successfully. Nov 1 00:22:36.053118 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 1 00:22:36.053207 systemd[1]: kubelet.service: Consumed 1.108s CPU time, 133M memory peak. Nov 1 00:22:36.055619 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 1 00:22:36.302185 update_engine[1576]: I20251101 00:22:36.302047 1576 update_attempter.cc:509] Updating boot flags... Nov 1 00:22:36.491795 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 1 00:22:36.509543 (kubelet)[2776]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 1 00:22:36.631060 kubelet[2776]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:22:36.631060 kubelet[2776]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 1 00:22:36.631060 kubelet[2776]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 1 00:22:36.631569 kubelet[2776]: I1101 00:22:36.631504 2776 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 1 00:22:36.644828 kubelet[2776]: I1101 00:22:36.643585 2776 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 1 00:22:36.644828 kubelet[2776]: I1101 00:22:36.643622 2776 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 1 00:22:36.644828 kubelet[2776]: I1101 00:22:36.643892 2776 server.go:954] "Client rotation is on, will bootstrap in background" Nov 1 00:22:36.647360 kubelet[2776]: I1101 00:22:36.647321 2776 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Nov 1 00:22:36.651263 kubelet[2776]: I1101 00:22:36.651221 2776 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 1 00:22:36.658001 kubelet[2776]: I1101 00:22:36.657956 2776 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 1 00:22:36.672189 kubelet[2776]: I1101 00:22:36.672108 2776 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 1 00:22:36.673000 kubelet[2776]: I1101 00:22:36.672425 2776 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 1 00:22:36.673000 kubelet[2776]: I1101 00:22:36.672466 2776 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPol
icyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 1 00:22:36.673000 kubelet[2776]: I1101 00:22:36.672649 2776 topology_manager.go:138] "Creating topology manager with none policy" Nov 1 00:22:36.673000 kubelet[2776]: I1101 00:22:36.672657 2776 container_manager_linux.go:304] "Creating device plugin manager" Nov 1 00:22:36.673223 kubelet[2776]: I1101 00:22:36.672713 2776 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:22:36.673223 kubelet[2776]: I1101 00:22:36.672900 2776 kubelet.go:446] "Attempting to sync node with API server" Nov 1 00:22:36.673223 kubelet[2776]: I1101 00:22:36.672946 2776 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 1 00:22:36.673223 kubelet[2776]: I1101 00:22:36.672997 2776 kubelet.go:352] "Adding apiserver pod source" Nov 1 00:22:36.675111 kubelet[2776]: I1101 00:22:36.673010 2776 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 1 00:22:36.675398 kubelet[2776]: I1101 00:22:36.675254 2776 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 1 00:22:36.675730 kubelet[2776]: I1101 00:22:36.675689 2776 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 1 00:22:36.676341 kubelet[2776]: I1101 00:22:36.676292 2776 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 1 00:22:36.676341 kubelet[2776]: I1101 00:22:36.676337 2776 server.go:1287] "Started kubelet" Nov 1 00:22:36.676849 kubelet[2776]: I1101 00:22:36.676786 2776 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 1 00:22:36.678163 kubelet[2776]: I1101 00:22:36.678133 2776 
server.go:479] "Adding debug handlers to kubelet server" Nov 1 00:22:36.679534 kubelet[2776]: I1101 00:22:36.679474 2776 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 1 00:22:36.679790 kubelet[2776]: I1101 00:22:36.679760 2776 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 1 00:22:36.679860 kubelet[2776]: I1101 00:22:36.679513 2776 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 1 00:22:36.683528 kubelet[2776]: I1101 00:22:36.683430 2776 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 1 00:22:36.693961 kubelet[2776]: I1101 00:22:36.693831 2776 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 1 00:22:36.701600 kubelet[2776]: I1101 00:22:36.701508 2776 reconciler.go:26] "Reconciler: start to sync state" Nov 1 00:22:36.701600 kubelet[2776]: I1101 00:22:36.701542 2776 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 1 00:22:36.705658 kubelet[2776]: E1101 00:22:36.705614 2776 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 1 00:22:36.707186 kubelet[2776]: I1101 00:22:36.707153 2776 factory.go:221] Registration of the containerd container factory successfully Nov 1 00:22:36.707186 kubelet[2776]: I1101 00:22:36.707180 2776 factory.go:221] Registration of the systemd container factory successfully Nov 1 00:22:36.708891 kubelet[2776]: I1101 00:22:36.707326 2776 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 1 00:22:36.743857 kubelet[2776]: I1101 00:22:36.743769 2776 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Nov 1 00:22:36.748075 kubelet[2776]: I1101 00:22:36.748011 2776 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 1 00:22:36.748812 kubelet[2776]: I1101 00:22:36.748790 2776 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 1 00:22:36.749734 kubelet[2776]: I1101 00:22:36.749714 2776 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 1 00:22:36.751522 kubelet[2776]: I1101 00:22:36.749796 2776 kubelet.go:2382] "Starting kubelet main sync loop" Nov 1 00:22:36.751522 kubelet[2776]: E1101 00:22:36.749870 2776 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 1 00:22:36.841748 kubelet[2776]: I1101 00:22:36.841720 2776 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 1 00:22:36.841977 kubelet[2776]: I1101 00:22:36.841963 2776 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 1 00:22:36.842090 kubelet[2776]: I1101 00:22:36.842078 2776 state_mem.go:36] "Initialized new in-memory state store" Nov 1 00:22:36.842341 kubelet[2776]: I1101 00:22:36.842323 2776 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 1 00:22:36.843243 kubelet[2776]: I1101 00:22:36.842973 2776 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 1 00:22:36.843243 kubelet[2776]: I1101 00:22:36.843005 2776 policy_none.go:49] "None policy: Start" Nov 1 00:22:36.843243 kubelet[2776]: I1101 00:22:36.843015 2776 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 1 00:22:36.843243 kubelet[2776]: I1101 00:22:36.843027 2776 state_mem.go:35] "Initializing new in-memory state store" Nov 1 00:22:36.843243 kubelet[2776]: I1101 00:22:36.843155 2776 state_mem.go:75] "Updated machine memory state" Nov 1 00:22:36.850023 kubelet[2776]: E1101 00:22:36.849973 2776 kubelet.go:2406] "Skipping pod synchronization" 
err="container runtime status check may not have completed yet" Nov 1 00:22:36.911682 kubelet[2776]: I1101 00:22:36.910466 2776 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 1 00:22:36.911682 kubelet[2776]: I1101 00:22:36.910783 2776 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 1 00:22:36.911682 kubelet[2776]: I1101 00:22:36.910803 2776 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 1 00:22:36.911682 kubelet[2776]: I1101 00:22:36.911254 2776 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 1 00:22:36.914261 kubelet[2776]: E1101 00:22:36.914227 2776 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 1 00:22:37.025970 kubelet[2776]: I1101 00:22:37.025900 2776 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 1 00:22:37.039784 kubelet[2776]: I1101 00:22:37.039726 2776 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 1 00:22:37.040008 kubelet[2776]: I1101 00:22:37.039851 2776 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 1 00:22:37.051648 kubelet[2776]: I1101 00:22:37.051560 2776 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 1 00:22:37.052816 kubelet[2776]: I1101 00:22:37.052781 2776 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:22:37.053114 kubelet[2776]: I1101 00:22:37.053084 2776 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 00:22:37.064562 kubelet[2776]: E1101 00:22:37.064498 2776 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" 
pod="kube-system/kube-apiserver-localhost" Nov 1 00:22:37.064781 kubelet[2776]: E1101 00:22:37.064663 2776 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:22:37.064781 kubelet[2776]: E1101 00:22:37.064721 2776 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 1 00:22:37.103584 kubelet[2776]: I1101 00:22:37.103484 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c4802903b9b7bb527f6ba7e74626de10-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c4802903b9b7bb527f6ba7e74626de10\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:22:37.103584 kubelet[2776]: I1101 00:22:37.103556 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:22:37.103584 kubelet[2776]: I1101 00:22:37.103581 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:22:37.103584 kubelet[2776]: I1101 00:22:37.103601 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: 
\"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:22:37.103895 kubelet[2776]: I1101 00:22:37.103627 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:22:37.103895 kubelet[2776]: I1101 00:22:37.103650 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 1 00:22:37.103895 kubelet[2776]: I1101 00:22:37.103673 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Nov 1 00:22:37.103895 kubelet[2776]: I1101 00:22:37.103693 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c4802903b9b7bb527f6ba7e74626de10-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c4802903b9b7bb527f6ba7e74626de10\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:22:37.103895 kubelet[2776]: I1101 00:22:37.103723 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c4802903b9b7bb527f6ba7e74626de10-k8s-certs\") pod 
\"kube-apiserver-localhost\" (UID: \"c4802903b9b7bb527f6ba7e74626de10\") " pod="kube-system/kube-apiserver-localhost" Nov 1 00:22:37.364922 kubelet[2776]: E1101 00:22:37.364860 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:37.365111 kubelet[2776]: E1101 00:22:37.365076 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:37.365111 kubelet[2776]: E1101 00:22:37.365104 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:37.674632 kubelet[2776]: I1101 00:22:37.674465 2776 apiserver.go:52] "Watching apiserver" Nov 1 00:22:37.702651 kubelet[2776]: I1101 00:22:37.702580 2776 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 1 00:22:37.794786 kubelet[2776]: I1101 00:22:37.794730 2776 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:22:37.794981 kubelet[2776]: E1101 00:22:37.794952 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:37.795684 kubelet[2776]: I1101 00:22:37.795652 2776 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 1 00:22:38.212562 kubelet[2776]: E1101 00:22:38.212248 2776 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Nov 1 00:22:38.212562 kubelet[2776]: I1101 00:22:38.212375 2776 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=5.212351797 podStartE2EDuration="5.212351797s" podCreationTimestamp="2025-11-01 00:22:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:22:38.212344414 +0000 UTC m=+1.696501162" watchObservedRunningTime="2025-11-01 00:22:38.212351797 +0000 UTC m=+1.696508545" Nov 1 00:22:38.212562 kubelet[2776]: E1101 00:22:38.212477 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:38.253291 kubelet[2776]: E1101 00:22:38.252849 2776 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 1 00:22:38.253291 kubelet[2776]: E1101 00:22:38.253124 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:38.795345 kubelet[2776]: I1101 00:22:38.795060 2776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=5.795011413 podStartE2EDuration="5.795011413s" podCreationTimestamp="2025-11-01 00:22:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:22:38.687265834 +0000 UTC m=+2.171422582" watchObservedRunningTime="2025-11-01 00:22:38.795011413 +0000 UTC m=+2.279168161" Nov 1 00:22:38.795345 kubelet[2776]: I1101 00:22:38.795184 2776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=5.795176636 podStartE2EDuration="5.795176636s" podCreationTimestamp="2025-11-01 00:22:33 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:22:38.794180191 +0000 UTC m=+2.278336939" watchObservedRunningTime="2025-11-01 00:22:38.795176636 +0000 UTC m=+2.279333384" Nov 1 00:22:38.797488 kubelet[2776]: E1101 00:22:38.797451 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:38.797764 kubelet[2776]: E1101 00:22:38.797737 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:38.797861 kubelet[2776]: E1101 00:22:38.797790 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:39.803099 kubelet[2776]: E1101 00:22:39.803059 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:40.247792 kubelet[2776]: I1101 00:22:40.247652 2776 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 1 00:22:40.248060 containerd[1598]: time="2025-11-01T00:22:40.248016025Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 1 00:22:40.248489 kubelet[2776]: I1101 00:22:40.248230 2776 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 1 00:22:40.908561 systemd[1]: Created slice kubepods-besteffort-pod8b995a79_9a86_47c6_be0c_07dadc5f217f.slice - libcontainer container kubepods-besteffort-pod8b995a79_9a86_47c6_be0c_07dadc5f217f.slice. 
Nov 1 00:22:41.025636 kubelet[2776]: I1101 00:22:41.025575 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8b995a79-9a86-47c6-be0c-07dadc5f217f-kube-proxy\") pod \"kube-proxy-snd4c\" (UID: \"8b995a79-9a86-47c6-be0c-07dadc5f217f\") " pod="kube-system/kube-proxy-snd4c" Nov 1 00:22:41.026117 kubelet[2776]: I1101 00:22:41.025711 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr9q8\" (UniqueName: \"kubernetes.io/projected/8b995a79-9a86-47c6-be0c-07dadc5f217f-kube-api-access-hr9q8\") pod \"kube-proxy-snd4c\" (UID: \"8b995a79-9a86-47c6-be0c-07dadc5f217f\") " pod="kube-system/kube-proxy-snd4c" Nov 1 00:22:41.026117 kubelet[2776]: I1101 00:22:41.025790 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8b995a79-9a86-47c6-be0c-07dadc5f217f-xtables-lock\") pod \"kube-proxy-snd4c\" (UID: \"8b995a79-9a86-47c6-be0c-07dadc5f217f\") " pod="kube-system/kube-proxy-snd4c" Nov 1 00:22:41.026117 kubelet[2776]: I1101 00:22:41.025817 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8b995a79-9a86-47c6-be0c-07dadc5f217f-lib-modules\") pod \"kube-proxy-snd4c\" (UID: \"8b995a79-9a86-47c6-be0c-07dadc5f217f\") " pod="kube-system/kube-proxy-snd4c" Nov 1 00:22:41.221854 kubelet[2776]: E1101 00:22:41.221720 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:41.222483 containerd[1598]: time="2025-11-01T00:22:41.222409162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-snd4c,Uid:8b995a79-9a86-47c6-be0c-07dadc5f217f,Namespace:kube-system,Attempt:0,}" Nov 1 
00:22:41.246620 containerd[1598]: time="2025-11-01T00:22:41.246569269Z" level=info msg="connecting to shim 988e32dd268889210cb9df3347a5150483152f7c5050f1f0a1145e6c01a18f45" address="unix:///run/containerd/s/e3bdd18f04570bc2e1af937ff24e82ba35b764224f226420a6166703ed5f74a3" namespace=k8s.io protocol=ttrpc version=3 Nov 1 00:22:41.273325 systemd[1]: Started cri-containerd-988e32dd268889210cb9df3347a5150483152f7c5050f1f0a1145e6c01a18f45.scope - libcontainer container 988e32dd268889210cb9df3347a5150483152f7c5050f1f0a1145e6c01a18f45. Nov 1 00:22:41.316362 containerd[1598]: time="2025-11-01T00:22:41.316314777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-snd4c,Uid:8b995a79-9a86-47c6-be0c-07dadc5f217f,Namespace:kube-system,Attempt:0,} returns sandbox id \"988e32dd268889210cb9df3347a5150483152f7c5050f1f0a1145e6c01a18f45\"" Nov 1 00:22:41.320979 kubelet[2776]: E1101 00:22:41.320918 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:41.330424 containerd[1598]: time="2025-11-01T00:22:41.330377839Z" level=info msg="CreateContainer within sandbox \"988e32dd268889210cb9df3347a5150483152f7c5050f1f0a1145e6c01a18f45\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 1 00:22:41.331202 systemd[1]: Created slice kubepods-besteffort-podbe8b1263_f4d3_4c68_963b_60424939ccba.slice - libcontainer container kubepods-besteffort-podbe8b1263_f4d3_4c68_963b_60424939ccba.slice. Nov 1 00:22:41.349010 containerd[1598]: time="2025-11-01T00:22:41.348967198Z" level=info msg="Container 5d3f12d659fc270a3d8dde9cec5b2cacb552861bcce6d78f85910a573b687dda: CDI devices from CRI Config.CDIDevices: []" Nov 1 00:22:41.353056 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4175377738.mount: Deactivated successfully. 
Nov 1 00:22:41.358147 containerd[1598]: time="2025-11-01T00:22:41.358104891Z" level=info msg="CreateContainer within sandbox \"988e32dd268889210cb9df3347a5150483152f7c5050f1f0a1145e6c01a18f45\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5d3f12d659fc270a3d8dde9cec5b2cacb552861bcce6d78f85910a573b687dda\"" Nov 1 00:22:41.359007 containerd[1598]: time="2025-11-01T00:22:41.358804402Z" level=info msg="StartContainer for \"5d3f12d659fc270a3d8dde9cec5b2cacb552861bcce6d78f85910a573b687dda\"" Nov 1 00:22:41.360301 containerd[1598]: time="2025-11-01T00:22:41.360270098Z" level=info msg="connecting to shim 5d3f12d659fc270a3d8dde9cec5b2cacb552861bcce6d78f85910a573b687dda" address="unix:///run/containerd/s/e3bdd18f04570bc2e1af937ff24e82ba35b764224f226420a6166703ed5f74a3" protocol=ttrpc version=3 Nov 1 00:22:41.385203 systemd[1]: Started cri-containerd-5d3f12d659fc270a3d8dde9cec5b2cacb552861bcce6d78f85910a573b687dda.scope - libcontainer container 5d3f12d659fc270a3d8dde9cec5b2cacb552861bcce6d78f85910a573b687dda. 
Nov 1 00:22:41.427621 kubelet[2776]: I1101 00:22:41.427581 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww2j9\" (UniqueName: \"kubernetes.io/projected/be8b1263-f4d3-4c68-963b-60424939ccba-kube-api-access-ww2j9\") pod \"tigera-operator-7dcd859c48-4lbpm\" (UID: \"be8b1263-f4d3-4c68-963b-60424939ccba\") " pod="tigera-operator/tigera-operator-7dcd859c48-4lbpm" Nov 1 00:22:41.427810 kubelet[2776]: I1101 00:22:41.427794 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/be8b1263-f4d3-4c68-963b-60424939ccba-var-lib-calico\") pod \"tigera-operator-7dcd859c48-4lbpm\" (UID: \"be8b1263-f4d3-4c68-963b-60424939ccba\") " pod="tigera-operator/tigera-operator-7dcd859c48-4lbpm" Nov 1 00:22:41.434518 containerd[1598]: time="2025-11-01T00:22:41.434467592Z" level=info msg="StartContainer for \"5d3f12d659fc270a3d8dde9cec5b2cacb552861bcce6d78f85910a573b687dda\" returns successfully" Nov 1 00:22:41.637378 containerd[1598]: time="2025-11-01T00:22:41.637318278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-4lbpm,Uid:be8b1263-f4d3-4c68-963b-60424939ccba,Namespace:tigera-operator,Attempt:0,}" Nov 1 00:22:41.686668 containerd[1598]: time="2025-11-01T00:22:41.686578366Z" level=info msg="connecting to shim a00d9004f0ffe5bb16c68759f8bb5abe1acd871790287459c33747aaabd184e3" address="unix:///run/containerd/s/ee39ceac8c1183b3c4da2cac205da682ca6759566e4b01750b873fc32b082e12" namespace=k8s.io protocol=ttrpc version=3 Nov 1 00:22:41.744232 systemd[1]: Started cri-containerd-a00d9004f0ffe5bb16c68759f8bb5abe1acd871790287459c33747aaabd184e3.scope - libcontainer container a00d9004f0ffe5bb16c68759f8bb5abe1acd871790287459c33747aaabd184e3. 
Nov 1 00:22:41.796449 containerd[1598]: time="2025-11-01T00:22:41.796394833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-4lbpm,Uid:be8b1263-f4d3-4c68-963b-60424939ccba,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a00d9004f0ffe5bb16c68759f8bb5abe1acd871790287459c33747aaabd184e3\"" Nov 1 00:22:41.799765 containerd[1598]: time="2025-11-01T00:22:41.798797099Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 1 00:22:41.810591 kubelet[2776]: E1101 00:22:41.810542 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:41.824174 kubelet[2776]: I1101 00:22:41.824070 2776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-snd4c" podStartSLOduration=1.824040812 podStartE2EDuration="1.824040812s" podCreationTimestamp="2025-11-01 00:22:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:22:41.821764193 +0000 UTC m=+5.305920941" watchObservedRunningTime="2025-11-01 00:22:41.824040812 +0000 UTC m=+5.308197570" Nov 1 00:22:42.405089 kubelet[2776]: E1101 00:22:42.405034 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:42.811779 kubelet[2776]: E1101 00:22:42.811735 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:43.107388 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount169782801.mount: Deactivated successfully. 
Nov 1 00:22:43.484518 containerd[1598]: time="2025-11-01T00:22:43.484358912Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:43.485453 containerd[1598]: time="2025-11-01T00:22:43.485377314Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 1 00:22:43.486629 containerd[1598]: time="2025-11-01T00:22:43.486581966Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:43.488651 containerd[1598]: time="2025-11-01T00:22:43.488616244Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:43.489257 containerd[1598]: time="2025-11-01T00:22:43.489228740Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 1.690352091s" Nov 1 00:22:43.489313 containerd[1598]: time="2025-11-01T00:22:43.489259137Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 1 00:22:43.492960 containerd[1598]: time="2025-11-01T00:22:43.492909123Z" level=info msg="CreateContainer within sandbox \"a00d9004f0ffe5bb16c68759f8bb5abe1acd871790287459c33747aaabd184e3\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 1 00:22:43.503202 containerd[1598]: time="2025-11-01T00:22:43.503121960Z" level=info msg="Container 
8d12ddf6a2e9efc13a0919adef6f272716e2bae839189eeae31099e5f298f372: CDI devices from CRI Config.CDIDevices: []" Nov 1 00:22:43.509782 containerd[1598]: time="2025-11-01T00:22:43.509734886Z" level=info msg="CreateContainer within sandbox \"a00d9004f0ffe5bb16c68759f8bb5abe1acd871790287459c33747aaabd184e3\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8d12ddf6a2e9efc13a0919adef6f272716e2bae839189eeae31099e5f298f372\"" Nov 1 00:22:43.511681 containerd[1598]: time="2025-11-01T00:22:43.510452200Z" level=info msg="StartContainer for \"8d12ddf6a2e9efc13a0919adef6f272716e2bae839189eeae31099e5f298f372\"" Nov 1 00:22:43.511681 containerd[1598]: time="2025-11-01T00:22:43.511458688Z" level=info msg="connecting to shim 8d12ddf6a2e9efc13a0919adef6f272716e2bae839189eeae31099e5f298f372" address="unix:///run/containerd/s/ee39ceac8c1183b3c4da2cac205da682ca6759566e4b01750b873fc32b082e12" protocol=ttrpc version=3 Nov 1 00:22:43.550150 systemd[1]: Started cri-containerd-8d12ddf6a2e9efc13a0919adef6f272716e2bae839189eeae31099e5f298f372.scope - libcontainer container 8d12ddf6a2e9efc13a0919adef6f272716e2bae839189eeae31099e5f298f372. 
Nov 1 00:22:43.583795 containerd[1598]: time="2025-11-01T00:22:43.583747286Z" level=info msg="StartContainer for \"8d12ddf6a2e9efc13a0919adef6f272716e2bae839189eeae31099e5f298f372\" returns successfully" Nov 1 00:22:43.824089 kubelet[2776]: I1101 00:22:43.823995 2776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-4lbpm" podStartSLOduration=1.132076293 podStartE2EDuration="2.823971356s" podCreationTimestamp="2025-11-01 00:22:41 +0000 UTC" firstStartedPulling="2025-11-01 00:22:41.798174904 +0000 UTC m=+5.282331652" lastFinishedPulling="2025-11-01 00:22:43.490069967 +0000 UTC m=+6.974226715" observedRunningTime="2025-11-01 00:22:43.823289659 +0000 UTC m=+7.307446407" watchObservedRunningTime="2025-11-01 00:22:43.823971356 +0000 UTC m=+7.308128114" Nov 1 00:22:45.313791 kubelet[2776]: E1101 00:22:45.313740 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:45.821431 kubelet[2776]: E1101 00:22:45.821353 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:46.299144 systemd[1]: cri-containerd-8d12ddf6a2e9efc13a0919adef6f272716e2bae839189eeae31099e5f298f372.scope: Deactivated successfully. 
Nov 1 00:22:46.306496 containerd[1598]: time="2025-11-01T00:22:46.306440329Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8d12ddf6a2e9efc13a0919adef6f272716e2bae839189eeae31099e5f298f372\" id:\"8d12ddf6a2e9efc13a0919adef6f272716e2bae839189eeae31099e5f298f372\" pid:3117 exit_status:1 exited_at:{seconds:1761956566 nanos:304304333}" Nov 1 00:22:46.306496 containerd[1598]: time="2025-11-01T00:22:46.306486335Z" level=info msg="received exit event container_id:\"8d12ddf6a2e9efc13a0919adef6f272716e2bae839189eeae31099e5f298f372\" id:\"8d12ddf6a2e9efc13a0919adef6f272716e2bae839189eeae31099e5f298f372\" pid:3117 exit_status:1 exited_at:{seconds:1761956566 nanos:304304333}" Nov 1 00:22:46.343441 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d12ddf6a2e9efc13a0919adef6f272716e2bae839189eeae31099e5f298f372-rootfs.mount: Deactivated successfully. Nov 1 00:22:46.825970 kubelet[2776]: I1101 00:22:46.825563 2776 scope.go:117] "RemoveContainer" containerID="8d12ddf6a2e9efc13a0919adef6f272716e2bae839189eeae31099e5f298f372" Nov 1 00:22:46.829291 containerd[1598]: time="2025-11-01T00:22:46.829238822Z" level=info msg="CreateContainer within sandbox \"a00d9004f0ffe5bb16c68759f8bb5abe1acd871790287459c33747aaabd184e3\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Nov 1 00:22:46.843233 containerd[1598]: time="2025-11-01T00:22:46.843106989Z" level=info msg="Container 7031256a1516ea7dd18167df143dfd61a91e091188cbc8d2394bc620dcec0d1d: CDI devices from CRI Config.CDIDevices: []" Nov 1 00:22:46.845169 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3696934097.mount: Deactivated successfully. 
Nov 1 00:22:46.852648 containerd[1598]: time="2025-11-01T00:22:46.852591554Z" level=info msg="CreateContainer within sandbox \"a00d9004f0ffe5bb16c68759f8bb5abe1acd871790287459c33747aaabd184e3\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"7031256a1516ea7dd18167df143dfd61a91e091188cbc8d2394bc620dcec0d1d\"" Nov 1 00:22:46.853250 containerd[1598]: time="2025-11-01T00:22:46.853215008Z" level=info msg="StartContainer for \"7031256a1516ea7dd18167df143dfd61a91e091188cbc8d2394bc620dcec0d1d\"" Nov 1 00:22:46.854718 containerd[1598]: time="2025-11-01T00:22:46.854683606Z" level=info msg="connecting to shim 7031256a1516ea7dd18167df143dfd61a91e091188cbc8d2394bc620dcec0d1d" address="unix:///run/containerd/s/ee39ceac8c1183b3c4da2cac205da682ca6759566e4b01750b873fc32b082e12" protocol=ttrpc version=3 Nov 1 00:22:46.885264 systemd[1]: Started cri-containerd-7031256a1516ea7dd18167df143dfd61a91e091188cbc8d2394bc620dcec0d1d.scope - libcontainer container 7031256a1516ea7dd18167df143dfd61a91e091188cbc8d2394bc620dcec0d1d. Nov 1 00:22:46.941486 containerd[1598]: time="2025-11-01T00:22:46.941432603Z" level=info msg="StartContainer for \"7031256a1516ea7dd18167df143dfd61a91e091188cbc8d2394bc620dcec0d1d\" returns successfully" Nov 1 00:22:48.972771 kubelet[2776]: E1101 00:22:48.972686 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:49.590252 sudo[1830]: pam_unix(sudo:session): session closed for user root Nov 1 00:22:49.592374 sshd[1829]: Connection closed by 10.0.0.1 port 40326 Nov 1 00:22:49.592910 sshd-session[1826]: pam_unix(sshd:session): session closed for user core Nov 1 00:22:49.598038 systemd[1]: sshd@8-10.0.0.116:22-10.0.0.1:40326.service: Deactivated successfully. Nov 1 00:22:49.601123 systemd[1]: session-9.scope: Deactivated successfully. 
Nov 1 00:22:49.601408 systemd[1]: session-9.scope: Consumed 5.408s CPU time, 223M memory peak. Nov 1 00:22:49.603096 systemd-logind[1575]: Session 9 logged out. Waiting for processes to exit. Nov 1 00:22:49.604624 systemd-logind[1575]: Removed session 9. Nov 1 00:22:49.833105 kubelet[2776]: E1101 00:22:49.833062 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:57.041711 systemd[1]: Created slice kubepods-besteffort-pod5eccb2ea_883e_452e_a957_cfc4aa5f4499.slice - libcontainer container kubepods-besteffort-pod5eccb2ea_883e_452e_a957_cfc4aa5f4499.slice. Nov 1 00:22:57.137618 kubelet[2776]: I1101 00:22:57.137534 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5eccb2ea-883e-452e-a957-cfc4aa5f4499-tigera-ca-bundle\") pod \"calico-typha-574965fb8f-5p8rb\" (UID: \"5eccb2ea-883e-452e-a957-cfc4aa5f4499\") " pod="calico-system/calico-typha-574965fb8f-5p8rb" Nov 1 00:22:57.137618 kubelet[2776]: I1101 00:22:57.137591 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5eccb2ea-883e-452e-a957-cfc4aa5f4499-typha-certs\") pod \"calico-typha-574965fb8f-5p8rb\" (UID: \"5eccb2ea-883e-452e-a957-cfc4aa5f4499\") " pod="calico-system/calico-typha-574965fb8f-5p8rb" Nov 1 00:22:57.138264 kubelet[2776]: I1101 00:22:57.137640 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr26g\" (UniqueName: \"kubernetes.io/projected/5eccb2ea-883e-452e-a957-cfc4aa5f4499-kube-api-access-pr26g\") pod \"calico-typha-574965fb8f-5p8rb\" (UID: \"5eccb2ea-883e-452e-a957-cfc4aa5f4499\") " pod="calico-system/calico-typha-574965fb8f-5p8rb" Nov 1 00:22:57.226943 systemd[1]: Created slice 
kubepods-besteffort-podaf3d61a3_e7b1_4aa9_b176_9c115261a212.slice - libcontainer container kubepods-besteffort-podaf3d61a3_e7b1_4aa9_b176_9c115261a212.slice. Nov 1 00:22:57.339205 kubelet[2776]: I1101 00:22:57.339115 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/af3d61a3-e7b1-4aa9-b176-9c115261a212-policysync\") pod \"calico-node-6pxnw\" (UID: \"af3d61a3-e7b1-4aa9-b176-9c115261a212\") " pod="calico-system/calico-node-6pxnw" Nov 1 00:22:57.339205 kubelet[2776]: I1101 00:22:57.339182 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af3d61a3-e7b1-4aa9-b176-9c115261a212-tigera-ca-bundle\") pod \"calico-node-6pxnw\" (UID: \"af3d61a3-e7b1-4aa9-b176-9c115261a212\") " pod="calico-system/calico-node-6pxnw" Nov 1 00:22:57.339205 kubelet[2776]: I1101 00:22:57.339204 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/af3d61a3-e7b1-4aa9-b176-9c115261a212-var-lib-calico\") pod \"calico-node-6pxnw\" (UID: \"af3d61a3-e7b1-4aa9-b176-9c115261a212\") " pod="calico-system/calico-node-6pxnw" Nov 1 00:22:57.339205 kubelet[2776]: I1101 00:22:57.339224 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/af3d61a3-e7b1-4aa9-b176-9c115261a212-flexvol-driver-host\") pod \"calico-node-6pxnw\" (UID: \"af3d61a3-e7b1-4aa9-b176-9c115261a212\") " pod="calico-system/calico-node-6pxnw" Nov 1 00:22:57.339524 kubelet[2776]: I1101 00:22:57.339246 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/af3d61a3-e7b1-4aa9-b176-9c115261a212-node-certs\") pod \"calico-node-6pxnw\" 
(UID: \"af3d61a3-e7b1-4aa9-b176-9c115261a212\") " pod="calico-system/calico-node-6pxnw" Nov 1 00:22:57.339524 kubelet[2776]: I1101 00:22:57.339350 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/af3d61a3-e7b1-4aa9-b176-9c115261a212-lib-modules\") pod \"calico-node-6pxnw\" (UID: \"af3d61a3-e7b1-4aa9-b176-9c115261a212\") " pod="calico-system/calico-node-6pxnw" Nov 1 00:22:57.339524 kubelet[2776]: I1101 00:22:57.339430 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/af3d61a3-e7b1-4aa9-b176-9c115261a212-xtables-lock\") pod \"calico-node-6pxnw\" (UID: \"af3d61a3-e7b1-4aa9-b176-9c115261a212\") " pod="calico-system/calico-node-6pxnw" Nov 1 00:22:57.339524 kubelet[2776]: I1101 00:22:57.339452 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/af3d61a3-e7b1-4aa9-b176-9c115261a212-var-run-calico\") pod \"calico-node-6pxnw\" (UID: \"af3d61a3-e7b1-4aa9-b176-9c115261a212\") " pod="calico-system/calico-node-6pxnw" Nov 1 00:22:57.339524 kubelet[2776]: I1101 00:22:57.339481 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/af3d61a3-e7b1-4aa9-b176-9c115261a212-cni-bin-dir\") pod \"calico-node-6pxnw\" (UID: \"af3d61a3-e7b1-4aa9-b176-9c115261a212\") " pod="calico-system/calico-node-6pxnw" Nov 1 00:22:57.339687 kubelet[2776]: I1101 00:22:57.339501 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/af3d61a3-e7b1-4aa9-b176-9c115261a212-cni-net-dir\") pod \"calico-node-6pxnw\" (UID: \"af3d61a3-e7b1-4aa9-b176-9c115261a212\") " pod="calico-system/calico-node-6pxnw" Nov 
1 00:22:57.339687 kubelet[2776]: I1101 00:22:57.339522 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/af3d61a3-e7b1-4aa9-b176-9c115261a212-cni-log-dir\") pod \"calico-node-6pxnw\" (UID: \"af3d61a3-e7b1-4aa9-b176-9c115261a212\") " pod="calico-system/calico-node-6pxnw" Nov 1 00:22:57.339687 kubelet[2776]: I1101 00:22:57.339544 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g569l\" (UniqueName: \"kubernetes.io/projected/af3d61a3-e7b1-4aa9-b176-9c115261a212-kube-api-access-g569l\") pod \"calico-node-6pxnw\" (UID: \"af3d61a3-e7b1-4aa9-b176-9c115261a212\") " pod="calico-system/calico-node-6pxnw" Nov 1 00:22:57.350782 kubelet[2776]: E1101 00:22:57.350704 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:57.351554 containerd[1598]: time="2025-11-01T00:22:57.351478293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-574965fb8f-5p8rb,Uid:5eccb2ea-883e-452e-a957-cfc4aa5f4499,Namespace:calico-system,Attempt:0,}" Nov 1 00:22:57.377142 containerd[1598]: time="2025-11-01T00:22:57.376970455Z" level=info msg="connecting to shim 448d30af93f99cdccadf3723b2686e4eff5122dbbfc5a002f8fe7e26692aed34" address="unix:///run/containerd/s/71dec5ca4d269079f5b6f88cd0c0df48afe0e8f9f4cc37fa97ef359e30599c8d" namespace=k8s.io protocol=ttrpc version=3 Nov 1 00:22:57.414263 systemd[1]: Started cri-containerd-448d30af93f99cdccadf3723b2686e4eff5122dbbfc5a002f8fe7e26692aed34.scope - libcontainer container 448d30af93f99cdccadf3723b2686e4eff5122dbbfc5a002f8fe7e26692aed34. 
Nov 1 00:22:57.442117 kubelet[2776]: E1101 00:22:57.442065 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.442117 kubelet[2776]: W1101 00:22:57.442106 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.442117 kubelet[2776]: E1101 00:22:57.442157 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:57.443495 kubelet[2776]: E1101 00:22:57.442369 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.443495 kubelet[2776]: W1101 00:22:57.442380 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.443495 kubelet[2776]: E1101 00:22:57.442392 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:57.443495 kubelet[2776]: E1101 00:22:57.442580 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.443495 kubelet[2776]: W1101 00:22:57.442593 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.443495 kubelet[2776]: E1101 00:22:57.442604 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:57.443495 kubelet[2776]: E1101 00:22:57.442844 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.443495 kubelet[2776]: W1101 00:22:57.442855 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.443495 kubelet[2776]: E1101 00:22:57.442873 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:57.443495 kubelet[2776]: E1101 00:22:57.443161 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.445206 kubelet[2776]: W1101 00:22:57.443173 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.445206 kubelet[2776]: E1101 00:22:57.443185 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:57.453044 kubelet[2776]: E1101 00:22:57.451290 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.453044 kubelet[2776]: W1101 00:22:57.451323 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.453044 kubelet[2776]: E1101 00:22:57.451351 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:57.472146 kubelet[2776]: E1101 00:22:57.472084 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.472146 kubelet[2776]: W1101 00:22:57.472132 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.472825 kubelet[2776]: E1101 00:22:57.472165 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:57.482338 kubelet[2776]: E1101 00:22:57.479922 2776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bwpmn" podUID="9cb8b2d7-16ad-4489-b82c-4e442c6904d5" Nov 1 00:22:57.494497 containerd[1598]: time="2025-11-01T00:22:57.494432162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-574965fb8f-5p8rb,Uid:5eccb2ea-883e-452e-a957-cfc4aa5f4499,Namespace:calico-system,Attempt:0,} returns sandbox id \"448d30af93f99cdccadf3723b2686e4eff5122dbbfc5a002f8fe7e26692aed34\"" Nov 1 00:22:57.495460 kubelet[2776]: E1101 00:22:57.495426 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:57.496775 containerd[1598]: time="2025-11-01T00:22:57.496729059Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 1 00:22:57.536138 kubelet[2776]: E1101 00:22:57.536078 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:57.537132 containerd[1598]: time="2025-11-01T00:22:57.537052824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6pxnw,Uid:af3d61a3-e7b1-4aa9-b176-9c115261a212,Namespace:calico-system,Attempt:0,}" Nov 1 00:22:57.542640 kubelet[2776]: E1101 00:22:57.542592 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.542640 kubelet[2776]: W1101 00:22:57.542624 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: 
executable file not found in $PATH, output: "" Nov 1 00:22:57.542825 kubelet[2776]: E1101 00:22:57.542652 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:57.542957 kubelet[2776]: E1101 00:22:57.542916 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.543037 kubelet[2776]: W1101 00:22:57.543016 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.543037 kubelet[2776]: E1101 00:22:57.543032 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:57.543307 kubelet[2776]: E1101 00:22:57.543275 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.543307 kubelet[2776]: W1101 00:22:57.543287 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.543307 kubelet[2776]: E1101 00:22:57.543299 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:57.543573 kubelet[2776]: E1101 00:22:57.543553 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.543573 kubelet[2776]: W1101 00:22:57.543568 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.543659 kubelet[2776]: E1101 00:22:57.543579 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:57.543843 kubelet[2776]: E1101 00:22:57.543826 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.543843 kubelet[2776]: W1101 00:22:57.543840 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.543950 kubelet[2776]: E1101 00:22:57.543852 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:57.544143 kubelet[2776]: E1101 00:22:57.544124 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.544143 kubelet[2776]: W1101 00:22:57.544138 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.544234 kubelet[2776]: E1101 00:22:57.544150 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:57.544379 kubelet[2776]: E1101 00:22:57.544358 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.544379 kubelet[2776]: W1101 00:22:57.544370 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.544451 kubelet[2776]: E1101 00:22:57.544382 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:57.544587 kubelet[2776]: E1101 00:22:57.544568 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.544587 kubelet[2776]: W1101 00:22:57.544581 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.544656 kubelet[2776]: E1101 00:22:57.544592 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:57.544844 kubelet[2776]: E1101 00:22:57.544825 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.544844 kubelet[2776]: W1101 00:22:57.544838 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.544954 kubelet[2776]: E1101 00:22:57.544849 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:57.545603 kubelet[2776]: E1101 00:22:57.545583 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.545603 kubelet[2776]: W1101 00:22:57.545597 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.545686 kubelet[2776]: E1101 00:22:57.545608 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:57.545833 kubelet[2776]: E1101 00:22:57.545813 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.545833 kubelet[2776]: W1101 00:22:57.545826 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.545944 kubelet[2776]: E1101 00:22:57.545857 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:57.546319 kubelet[2776]: E1101 00:22:57.546284 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.546319 kubelet[2776]: W1101 00:22:57.546318 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.546407 kubelet[2776]: E1101 00:22:57.546332 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:57.546574 kubelet[2776]: E1101 00:22:57.546539 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.546574 kubelet[2776]: W1101 00:22:57.546553 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.546574 kubelet[2776]: E1101 00:22:57.546564 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:57.546765 kubelet[2776]: E1101 00:22:57.546744 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.546765 kubelet[2776]: W1101 00:22:57.546756 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.546765 kubelet[2776]: E1101 00:22:57.546766 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:57.547013 kubelet[2776]: E1101 00:22:57.546991 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.547013 kubelet[2776]: W1101 00:22:57.547004 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.547013 kubelet[2776]: E1101 00:22:57.547016 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:57.547213 kubelet[2776]: E1101 00:22:57.547193 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.547213 kubelet[2776]: W1101 00:22:57.547205 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.547213 kubelet[2776]: E1101 00:22:57.547215 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:57.547423 kubelet[2776]: E1101 00:22:57.547403 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.547423 kubelet[2776]: W1101 00:22:57.547415 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.547423 kubelet[2776]: E1101 00:22:57.547424 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:57.547617 kubelet[2776]: E1101 00:22:57.547597 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.547617 kubelet[2776]: W1101 00:22:57.547609 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.547716 kubelet[2776]: E1101 00:22:57.547619 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:57.547817 kubelet[2776]: E1101 00:22:57.547798 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.547817 kubelet[2776]: W1101 00:22:57.547810 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.547922 kubelet[2776]: E1101 00:22:57.547820 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:57.548061 kubelet[2776]: E1101 00:22:57.548042 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.548103 kubelet[2776]: W1101 00:22:57.548054 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.548103 kubelet[2776]: E1101 00:22:57.548096 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:57.563748 containerd[1598]: time="2025-11-01T00:22:57.563689258Z" level=info msg="connecting to shim b7f4b8fe38b33c5cb55ffc4fa3ef303cdf9dcd4af3f4b1b3a6cd2b946400d6d7" address="unix:///run/containerd/s/76750c96fb5f231c51ec3fe1d59468a16a3e71b424b73caaa8960bd73b68cb14" namespace=k8s.io protocol=ttrpc version=3 Nov 1 00:22:57.603171 systemd[1]: Started cri-containerd-b7f4b8fe38b33c5cb55ffc4fa3ef303cdf9dcd4af3f4b1b3a6cd2b946400d6d7.scope - libcontainer container b7f4b8fe38b33c5cb55ffc4fa3ef303cdf9dcd4af3f4b1b3a6cd2b946400d6d7. 
Nov 1 00:22:57.639399 containerd[1598]: time="2025-11-01T00:22:57.639346345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6pxnw,Uid:af3d61a3-e7b1-4aa9-b176-9c115261a212,Namespace:calico-system,Attempt:0,} returns sandbox id \"b7f4b8fe38b33c5cb55ffc4fa3ef303cdf9dcd4af3f4b1b3a6cd2b946400d6d7\"" Nov 1 00:22:57.640389 kubelet[2776]: E1101 00:22:57.640351 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:57.641243 kubelet[2776]: E1101 00:22:57.641221 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.641318 kubelet[2776]: W1101 00:22:57.641242 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.641318 kubelet[2776]: E1101 00:22:57.641264 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:57.641318 kubelet[2776]: I1101 00:22:57.641298 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9cb8b2d7-16ad-4489-b82c-4e442c6904d5-registration-dir\") pod \"csi-node-driver-bwpmn\" (UID: \"9cb8b2d7-16ad-4489-b82c-4e442c6904d5\") " pod="calico-system/csi-node-driver-bwpmn" Nov 1 00:22:57.641578 kubelet[2776]: E1101 00:22:57.641553 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.641578 kubelet[2776]: W1101 00:22:57.641570 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.641692 kubelet[2776]: E1101 00:22:57.641585 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:57.641692 kubelet[2776]: I1101 00:22:57.641606 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28xl7\" (UniqueName: \"kubernetes.io/projected/9cb8b2d7-16ad-4489-b82c-4e442c6904d5-kube-api-access-28xl7\") pod \"csi-node-driver-bwpmn\" (UID: \"9cb8b2d7-16ad-4489-b82c-4e442c6904d5\") " pod="calico-system/csi-node-driver-bwpmn" Nov 1 00:22:57.641868 kubelet[2776]: E1101 00:22:57.641846 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.641868 kubelet[2776]: W1101 00:22:57.641866 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.642003 kubelet[2776]: E1101 00:22:57.641893 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:57.642003 kubelet[2776]: I1101 00:22:57.641956 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9cb8b2d7-16ad-4489-b82c-4e442c6904d5-kubelet-dir\") pod \"csi-node-driver-bwpmn\" (UID: \"9cb8b2d7-16ad-4489-b82c-4e442c6904d5\") " pod="calico-system/csi-node-driver-bwpmn" Nov 1 00:22:57.642332 kubelet[2776]: E1101 00:22:57.642313 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.642332 kubelet[2776]: W1101 00:22:57.642328 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.642436 kubelet[2776]: E1101 00:22:57.642350 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:57.642656 kubelet[2776]: E1101 00:22:57.642613 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.642656 kubelet[2776]: W1101 00:22:57.642626 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.642656 kubelet[2776]: E1101 00:22:57.642644 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:57.643145 kubelet[2776]: E1101 00:22:57.643127 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.643145 kubelet[2776]: W1101 00:22:57.643143 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.643253 kubelet[2776]: E1101 00:22:57.643164 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:57.643428 kubelet[2776]: E1101 00:22:57.643410 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.643428 kubelet[2776]: W1101 00:22:57.643426 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.643544 kubelet[2776]: E1101 00:22:57.643524 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:57.643869 kubelet[2776]: E1101 00:22:57.643849 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.643869 kubelet[2776]: W1101 00:22:57.643867 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.643993 kubelet[2776]: E1101 00:22:57.643923 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:57.644030 kubelet[2776]: I1101 00:22:57.644013 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9cb8b2d7-16ad-4489-b82c-4e442c6904d5-socket-dir\") pod \"csi-node-driver-bwpmn\" (UID: \"9cb8b2d7-16ad-4489-b82c-4e442c6904d5\") " pod="calico-system/csi-node-driver-bwpmn" Nov 1 00:22:57.644200 kubelet[2776]: E1101 00:22:57.644181 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.644200 kubelet[2776]: W1101 00:22:57.644196 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.644261 kubelet[2776]: E1101 00:22:57.644216 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:57.644489 kubelet[2776]: E1101 00:22:57.644470 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.644489 kubelet[2776]: W1101 00:22:57.644484 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.644579 kubelet[2776]: E1101 00:22:57.644501 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:57.644770 kubelet[2776]: E1101 00:22:57.644751 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.644770 kubelet[2776]: W1101 00:22:57.644767 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.644999 kubelet[2776]: E1101 00:22:57.644800 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:57.645202 kubelet[2776]: E1101 00:22:57.645182 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.645202 kubelet[2776]: W1101 00:22:57.645195 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.645286 kubelet[2776]: E1101 00:22:57.645214 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:57.645443 kubelet[2776]: E1101 00:22:57.645423 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.645443 kubelet[2776]: W1101 00:22:57.645435 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.645519 kubelet[2776]: E1101 00:22:57.645446 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:57.645519 kubelet[2776]: I1101 00:22:57.645471 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9cb8b2d7-16ad-4489-b82c-4e442c6904d5-varrun\") pod \"csi-node-driver-bwpmn\" (UID: \"9cb8b2d7-16ad-4489-b82c-4e442c6904d5\") " pod="calico-system/csi-node-driver-bwpmn" Nov 1 00:22:57.645726 kubelet[2776]: E1101 00:22:57.645706 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.645726 kubelet[2776]: W1101 00:22:57.645719 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.645796 kubelet[2776]: E1101 00:22:57.645730 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:57.645993 kubelet[2776]: E1101 00:22:57.645967 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.645993 kubelet[2776]: W1101 00:22:57.645980 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.645993 kubelet[2776]: E1101 00:22:57.645991 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:57.746288 kubelet[2776]: E1101 00:22:57.746240 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.746288 kubelet[2776]: W1101 00:22:57.746270 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.746288 kubelet[2776]: E1101 00:22:57.746296 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:57.746574 kubelet[2776]: E1101 00:22:57.746560 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.746574 kubelet[2776]: W1101 00:22:57.746570 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.746633 kubelet[2776]: E1101 00:22:57.746584 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:57.746923 kubelet[2776]: E1101 00:22:57.746902 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.746923 kubelet[2776]: W1101 00:22:57.746915 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.746989 kubelet[2776]: E1101 00:22:57.746957 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:57.747153 kubelet[2776]: E1101 00:22:57.747140 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.747153 kubelet[2776]: W1101 00:22:57.747150 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.747224 kubelet[2776]: E1101 00:22:57.747170 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:57.747488 kubelet[2776]: E1101 00:22:57.747455 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.747538 kubelet[2776]: W1101 00:22:57.747486 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.747538 kubelet[2776]: E1101 00:22:57.747519 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:57.747701 kubelet[2776]: E1101 00:22:57.747685 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.747701 kubelet[2776]: W1101 00:22:57.747697 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.747766 kubelet[2776]: E1101 00:22:57.747712 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:57.747959 kubelet[2776]: E1101 00:22:57.747939 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.747959 kubelet[2776]: W1101 00:22:57.747954 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.748080 kubelet[2776]: E1101 00:22:57.748021 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:57.748255 kubelet[2776]: E1101 00:22:57.748138 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.748255 kubelet[2776]: W1101 00:22:57.748147 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.748255 kubelet[2776]: E1101 00:22:57.748178 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:57.748335 kubelet[2776]: E1101 00:22:57.748313 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.748335 kubelet[2776]: W1101 00:22:57.748321 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.748377 kubelet[2776]: E1101 00:22:57.748352 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:57.748481 kubelet[2776]: E1101 00:22:57.748465 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.748504 kubelet[2776]: W1101 00:22:57.748480 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.748532 kubelet[2776]: E1101 00:22:57.748512 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:57.748689 kubelet[2776]: E1101 00:22:57.748669 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.748689 kubelet[2776]: W1101 00:22:57.748683 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.748768 kubelet[2776]: E1101 00:22:57.748724 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:57.748970 kubelet[2776]: E1101 00:22:57.748954 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.748970 kubelet[2776]: W1101 00:22:57.748966 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.749034 kubelet[2776]: E1101 00:22:57.748981 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:57.749190 kubelet[2776]: E1101 00:22:57.749175 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.749190 kubelet[2776]: W1101 00:22:57.749185 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.749249 kubelet[2776]: E1101 00:22:57.749201 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:57.749420 kubelet[2776]: E1101 00:22:57.749405 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.749420 kubelet[2776]: W1101 00:22:57.749418 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.749483 kubelet[2776]: E1101 00:22:57.749432 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:57.749701 kubelet[2776]: E1101 00:22:57.749677 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.749743 kubelet[2776]: W1101 00:22:57.749701 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.749743 kubelet[2776]: E1101 00:22:57.749731 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:57.749968 kubelet[2776]: E1101 00:22:57.749922 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.749968 kubelet[2776]: W1101 00:22:57.749963 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.750032 kubelet[2776]: E1101 00:22:57.749977 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:57.750203 kubelet[2776]: E1101 00:22:57.750188 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.750203 kubelet[2776]: W1101 00:22:57.750200 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.750256 kubelet[2776]: E1101 00:22:57.750233 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:57.750389 kubelet[2776]: E1101 00:22:57.750374 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.750389 kubelet[2776]: W1101 00:22:57.750386 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.750436 kubelet[2776]: E1101 00:22:57.750413 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:57.750575 kubelet[2776]: E1101 00:22:57.750560 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.750575 kubelet[2776]: W1101 00:22:57.750572 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.750620 kubelet[2776]: E1101 00:22:57.750599 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:57.750803 kubelet[2776]: E1101 00:22:57.750788 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.750803 kubelet[2776]: W1101 00:22:57.750799 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.750871 kubelet[2776]: E1101 00:22:57.750814 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:57.751051 kubelet[2776]: E1101 00:22:57.751035 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.751051 kubelet[2776]: W1101 00:22:57.751046 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.751106 kubelet[2776]: E1101 00:22:57.751060 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:57.751271 kubelet[2776]: E1101 00:22:57.751256 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.751271 kubelet[2776]: W1101 00:22:57.751267 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.751319 kubelet[2776]: E1101 00:22:57.751280 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:57.751489 kubelet[2776]: E1101 00:22:57.751473 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.751516 kubelet[2776]: W1101 00:22:57.751488 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.751516 kubelet[2776]: E1101 00:22:57.751509 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:57.751775 kubelet[2776]: E1101 00:22:57.751749 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.751775 kubelet[2776]: W1101 00:22:57.751761 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.751902 kubelet[2776]: E1101 00:22:57.751787 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:57.752100 kubelet[2776]: E1101 00:22:57.752070 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.752100 kubelet[2776]: W1101 00:22:57.752094 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.752173 kubelet[2776]: E1101 00:22:57.752109 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:57.761696 kubelet[2776]: E1101 00:22:57.761660 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:57.761696 kubelet[2776]: W1101 00:22:57.761687 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:57.761876 kubelet[2776]: E1101 00:22:57.761713 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:59.248951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3299720414.mount: Deactivated successfully. 
Nov 1 00:22:59.612187 containerd[1598]: time="2025-11-01T00:22:59.612099565Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:59.613108 containerd[1598]: time="2025-11-01T00:22:59.613067575Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Nov 1 00:22:59.614507 containerd[1598]: time="2025-11-01T00:22:59.614465591Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:59.616919 containerd[1598]: time="2025-11-01T00:22:59.616866814Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:22:59.617532 containerd[1598]: time="2025-11-01T00:22:59.617452876Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.120677569s" Nov 1 00:22:59.617532 containerd[1598]: time="2025-11-01T00:22:59.617503941Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Nov 1 00:22:59.618674 containerd[1598]: time="2025-11-01T00:22:59.618633024Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 1 00:22:59.631310 containerd[1598]: time="2025-11-01T00:22:59.631250486Z" level=info msg="CreateContainer within sandbox \"448d30af93f99cdccadf3723b2686e4eff5122dbbfc5a002f8fe7e26692aed34\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 1 00:22:59.640751 containerd[1598]: time="2025-11-01T00:22:59.640678845Z" level=info msg="Container 68c47204c20bea1cffb78ad47819632174da72e3d71c2c062c8fab742161c75b: CDI devices from CRI Config.CDIDevices: []" Nov 1 00:22:59.648754 containerd[1598]: time="2025-11-01T00:22:59.648689340Z" level=info msg="CreateContainer within sandbox \"448d30af93f99cdccadf3723b2686e4eff5122dbbfc5a002f8fe7e26692aed34\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"68c47204c20bea1cffb78ad47819632174da72e3d71c2c062c8fab742161c75b\"" Nov 1 00:22:59.649353 containerd[1598]: time="2025-11-01T00:22:59.649312972Z" level=info msg="StartContainer for \"68c47204c20bea1cffb78ad47819632174da72e3d71c2c062c8fab742161c75b\"" Nov 1 00:22:59.650673 containerd[1598]: time="2025-11-01T00:22:59.650636209Z" level=info msg="connecting to shim 68c47204c20bea1cffb78ad47819632174da72e3d71c2c062c8fab742161c75b" address="unix:///run/containerd/s/71dec5ca4d269079f5b6f88cd0c0df48afe0e8f9f4cc37fa97ef359e30599c8d" protocol=ttrpc version=3 Nov 1 00:22:59.681233 systemd[1]: Started cri-containerd-68c47204c20bea1cffb78ad47819632174da72e3d71c2c062c8fab742161c75b.scope - libcontainer container 68c47204c20bea1cffb78ad47819632174da72e3d71c2c062c8fab742161c75b. 
Nov 1 00:22:59.750246 containerd[1598]: time="2025-11-01T00:22:59.750191358Z" level=info msg="StartContainer for \"68c47204c20bea1cffb78ad47819632174da72e3d71c2c062c8fab742161c75b\" returns successfully" Nov 1 00:22:59.751160 kubelet[2776]: E1101 00:22:59.751077 2776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bwpmn" podUID="9cb8b2d7-16ad-4489-b82c-4e442c6904d5" Nov 1 00:22:59.861980 kubelet[2776]: E1101 00:22:59.861555 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:22:59.865443 kubelet[2776]: E1101 00:22:59.865231 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:59.865443 kubelet[2776]: W1101 00:22:59.865251 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:59.865443 kubelet[2776]: E1101 00:22:59.865273 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:59.865764 kubelet[2776]: E1101 00:22:59.865748 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:59.865979 kubelet[2776]: W1101 00:22:59.865847 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:59.865979 kubelet[2776]: E1101 00:22:59.865880 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:59.866563 kubelet[2776]: E1101 00:22:59.866485 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:59.867050 kubelet[2776]: W1101 00:22:59.866884 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:59.867050 kubelet[2776]: E1101 00:22:59.866999 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:59.867509 kubelet[2776]: E1101 00:22:59.867468 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:59.867509 kubelet[2776]: W1101 00:22:59.867481 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:59.867509 kubelet[2776]: E1101 00:22:59.867491 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:59.868041 kubelet[2776]: E1101 00:22:59.867970 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:59.868041 kubelet[2776]: W1101 00:22:59.867992 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:59.868041 kubelet[2776]: E1101 00:22:59.868003 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:59.868373 kubelet[2776]: E1101 00:22:59.868315 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:59.868373 kubelet[2776]: W1101 00:22:59.868327 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:59.868373 kubelet[2776]: E1101 00:22:59.868337 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:59.868704 kubelet[2776]: E1101 00:22:59.868646 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:59.868704 kubelet[2776]: W1101 00:22:59.868659 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:59.868704 kubelet[2776]: E1101 00:22:59.868668 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:59.869189 kubelet[2776]: E1101 00:22:59.869150 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:59.869189 kubelet[2776]: W1101 00:22:59.869162 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:59.869189 kubelet[2776]: E1101 00:22:59.869173 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:59.869639 kubelet[2776]: E1101 00:22:59.869562 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:59.869639 kubelet[2776]: W1101 00:22:59.869575 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:59.869639 kubelet[2776]: E1101 00:22:59.869584 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:59.870047 kubelet[2776]: E1101 00:22:59.869975 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:59.870047 kubelet[2776]: W1101 00:22:59.869987 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:59.870047 kubelet[2776]: E1101 00:22:59.869997 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:59.870531 kubelet[2776]: E1101 00:22:59.870456 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:59.870531 kubelet[2776]: W1101 00:22:59.870480 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:59.870531 kubelet[2776]: E1101 00:22:59.870491 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:59.870913 kubelet[2776]: E1101 00:22:59.870889 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:59.871063 kubelet[2776]: W1101 00:22:59.870974 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:59.871063 kubelet[2776]: E1101 00:22:59.870989 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:59.871624 kubelet[2776]: E1101 00:22:59.871511 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:59.871624 kubelet[2776]: W1101 00:22:59.871537 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:59.871624 kubelet[2776]: E1101 00:22:59.871550 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:59.872318 kubelet[2776]: E1101 00:22:59.872197 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:59.872318 kubelet[2776]: W1101 00:22:59.872237 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:59.872318 kubelet[2776]: E1101 00:22:59.872250 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:59.873087 kubelet[2776]: E1101 00:22:59.873071 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:59.873520 kubelet[2776]: W1101 00:22:59.873188 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:59.873600 kubelet[2776]: E1101 00:22:59.873586 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:59.875107 kubelet[2776]: E1101 00:22:59.875090 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:59.875199 kubelet[2776]: W1101 00:22:59.875182 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:59.875270 kubelet[2776]: E1101 00:22:59.875257 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:59.875627 kubelet[2776]: E1101 00:22:59.875547 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:59.875627 kubelet[2776]: W1101 00:22:59.875559 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:59.875627 kubelet[2776]: E1101 00:22:59.875576 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:59.876032 kubelet[2776]: E1101 00:22:59.875944 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:59.876032 kubelet[2776]: W1101 00:22:59.875967 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:59.876032 kubelet[2776]: E1101 00:22:59.875985 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:59.876813 kubelet[2776]: E1101 00:22:59.876772 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:59.876896 kubelet[2776]: W1101 00:22:59.876812 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:59.877409 kubelet[2776]: E1101 00:22:59.877379 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:59.877409 kubelet[2776]: W1101 00:22:59.877394 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:59.877554 kubelet[2776]: E1101 00:22:59.877504 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:59.877554 kubelet[2776]: E1101 00:22:59.877520 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:59.877830 kubelet[2776]: E1101 00:22:59.877804 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:59.877830 kubelet[2776]: W1101 00:22:59.877816 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:59.878021 kubelet[2776]: E1101 00:22:59.878008 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:59.878268 kubelet[2776]: E1101 00:22:59.878243 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:59.878268 kubelet[2776]: W1101 00:22:59.878254 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:59.878426 kubelet[2776]: E1101 00:22:59.878377 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:59.883440 kubelet[2776]: E1101 00:22:59.883414 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:59.883624 kubelet[2776]: W1101 00:22:59.883537 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:59.883712 kubelet[2776]: E1101 00:22:59.883674 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:59.883997 kubelet[2776]: E1101 00:22:59.883983 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:59.884427 kubelet[2776]: W1101 00:22:59.884412 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:59.884596 kubelet[2776]: E1101 00:22:59.884566 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:59.884701 kubelet[2776]: E1101 00:22:59.884689 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:59.884751 kubelet[2776]: W1101 00:22:59.884740 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:59.884898 kubelet[2776]: E1101 00:22:59.884874 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:59.885808 kubelet[2776]: I1101 00:22:59.885493 2776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-574965fb8f-5p8rb" podStartSLOduration=0.763533178 podStartE2EDuration="2.885479902s" podCreationTimestamp="2025-11-01 00:22:57 +0000 UTC" firstStartedPulling="2025-11-01 00:22:57.496413116 +0000 UTC m=+20.980569864" lastFinishedPulling="2025-11-01 00:22:59.61835983 +0000 UTC m=+23.102516588" observedRunningTime="2025-11-01 00:22:59.884393561 +0000 UTC m=+23.368550309" watchObservedRunningTime="2025-11-01 00:22:59.885479902 +0000 UTC m=+23.369636640" Nov 1 00:22:59.885958 kubelet[2776]: E1101 00:22:59.885945 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:59.886022 kubelet[2776]: W1101 00:22:59.886008 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:59.886300 kubelet[2776]: E1101 00:22:59.886285 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:59.886750 kubelet[2776]: E1101 00:22:59.886641 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:59.886750 kubelet[2776]: W1101 00:22:59.886652 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:59.886849 kubelet[2776]: E1101 00:22:59.886834 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:59.886978 kubelet[2776]: E1101 00:22:59.886966 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:59.887069 kubelet[2776]: W1101 00:22:59.887028 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:59.887217 kubelet[2776]: E1101 00:22:59.887204 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:59.888183 kubelet[2776]: E1101 00:22:59.888050 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:59.888183 kubelet[2776]: W1101 00:22:59.888062 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:59.888183 kubelet[2776]: E1101 00:22:59.888073 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:59.888867 kubelet[2776]: E1101 00:22:59.888830 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:59.888867 kubelet[2776]: W1101 00:22:59.888843 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:59.889022 kubelet[2776]: E1101 00:22:59.888970 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:59.889295 kubelet[2776]: E1101 00:22:59.889269 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:59.889295 kubelet[2776]: W1101 00:22:59.889281 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:59.889589 kubelet[2776]: E1101 00:22:59.889423 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:22:59.889916 kubelet[2776]: E1101 00:22:59.889903 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:59.890031 kubelet[2776]: W1101 00:22:59.890018 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:59.890101 kubelet[2776]: E1101 00:22:59.890090 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:22:59.890376 kubelet[2776]: E1101 00:22:59.890336 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:22:59.890376 kubelet[2776]: W1101 00:22:59.890348 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:22:59.890376 kubelet[2776]: E1101 00:22:59.890358 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:00.863070 kubelet[2776]: I1101 00:23:00.862828 2776 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:23:00.863726 kubelet[2776]: E1101 00:23:00.863314 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:00.878561 kubelet[2776]: E1101 00:23:00.878479 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.878561 kubelet[2776]: W1101 00:23:00.878517 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.878561 kubelet[2776]: E1101 00:23:00.878548 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:00.880076 kubelet[2776]: E1101 00:23:00.880035 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.880076 kubelet[2776]: W1101 00:23:00.880071 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.880170 kubelet[2776]: E1101 00:23:00.880101 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:00.880461 kubelet[2776]: E1101 00:23:00.880430 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.880461 kubelet[2776]: W1101 00:23:00.880444 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.880461 kubelet[2776]: E1101 00:23:00.880454 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:00.880780 kubelet[2776]: E1101 00:23:00.880721 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.880780 kubelet[2776]: W1101 00:23:00.880733 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.880780 kubelet[2776]: E1101 00:23:00.880743 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:00.881020 kubelet[2776]: E1101 00:23:00.880994 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.881020 kubelet[2776]: W1101 00:23:00.881016 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.881125 kubelet[2776]: E1101 00:23:00.881027 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:00.881325 kubelet[2776]: E1101 00:23:00.881288 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.881325 kubelet[2776]: W1101 00:23:00.881314 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.881400 kubelet[2776]: E1101 00:23:00.881339 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:00.881656 kubelet[2776]: E1101 00:23:00.881634 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.881656 kubelet[2776]: W1101 00:23:00.881648 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.881734 kubelet[2776]: E1101 00:23:00.881660 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:00.882015 kubelet[2776]: E1101 00:23:00.881993 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.882015 kubelet[2776]: W1101 00:23:00.882007 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.882097 kubelet[2776]: E1101 00:23:00.882019 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:00.882311 kubelet[2776]: E1101 00:23:00.882280 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.882311 kubelet[2776]: W1101 00:23:00.882297 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.882386 kubelet[2776]: E1101 00:23:00.882311 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:00.882585 kubelet[2776]: E1101 00:23:00.882553 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.882585 kubelet[2776]: W1101 00:23:00.882565 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.882585 kubelet[2776]: E1101 00:23:00.882576 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:00.882781 kubelet[2776]: E1101 00:23:00.882764 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.882781 kubelet[2776]: W1101 00:23:00.882776 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.882962 kubelet[2776]: E1101 00:23:00.882787 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:00.883026 kubelet[2776]: E1101 00:23:00.882985 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.883026 kubelet[2776]: W1101 00:23:00.882995 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.883026 kubelet[2776]: E1101 00:23:00.883006 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:00.883286 kubelet[2776]: E1101 00:23:00.883249 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.883286 kubelet[2776]: W1101 00:23:00.883268 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.883286 kubelet[2776]: E1101 00:23:00.883283 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:00.883596 kubelet[2776]: E1101 00:23:00.883580 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.883596 kubelet[2776]: W1101 00:23:00.883594 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.883666 kubelet[2776]: E1101 00:23:00.883606 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:00.884070 kubelet[2776]: E1101 00:23:00.884056 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.884070 kubelet[2776]: W1101 00:23:00.884068 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.884159 kubelet[2776]: E1101 00:23:00.884081 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:00.887060 kubelet[2776]: E1101 00:23:00.887037 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.887060 kubelet[2776]: W1101 00:23:00.887056 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.887130 kubelet[2776]: E1101 00:23:00.887072 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:00.887523 kubelet[2776]: E1101 00:23:00.887504 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.887523 kubelet[2776]: W1101 00:23:00.887522 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.887596 kubelet[2776]: E1101 00:23:00.887583 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:00.887863 kubelet[2776]: E1101 00:23:00.887847 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.887898 kubelet[2776]: W1101 00:23:00.887863 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.889636 kubelet[2776]: E1101 00:23:00.889607 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:00.890075 kubelet[2776]: E1101 00:23:00.890035 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.890123 kubelet[2776]: W1101 00:23:00.890078 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.890286 kubelet[2776]: E1101 00:23:00.890214 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:00.890440 kubelet[2776]: E1101 00:23:00.890420 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.890470 kubelet[2776]: W1101 00:23:00.890445 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.890563 kubelet[2776]: E1101 00:23:00.890524 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:00.890739 kubelet[2776]: E1101 00:23:00.890709 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.890739 kubelet[2776]: W1101 00:23:00.890724 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.890848 kubelet[2776]: E1101 00:23:00.890742 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:00.891015 kubelet[2776]: E1101 00:23:00.890994 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.891015 kubelet[2776]: W1101 00:23:00.891009 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.891100 kubelet[2776]: E1101 00:23:00.891029 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:00.891294 kubelet[2776]: E1101 00:23:00.891254 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.891294 kubelet[2776]: W1101 00:23:00.891267 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.891294 kubelet[2776]: E1101 00:23:00.891288 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:00.891552 kubelet[2776]: E1101 00:23:00.891534 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.891616 kubelet[2776]: W1101 00:23:00.891562 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.891616 kubelet[2776]: E1101 00:23:00.891577 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:00.891893 kubelet[2776]: E1101 00:23:00.891873 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.891893 kubelet[2776]: W1101 00:23:00.891887 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.892015 kubelet[2776]: E1101 00:23:00.891993 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:00.892195 kubelet[2776]: E1101 00:23:00.892172 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.892195 kubelet[2776]: W1101 00:23:00.892192 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.892285 kubelet[2776]: E1101 00:23:00.892236 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:00.892410 kubelet[2776]: E1101 00:23:00.892390 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.892410 kubelet[2776]: W1101 00:23:00.892402 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.892479 kubelet[2776]: E1101 00:23:00.892420 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:00.892643 kubelet[2776]: E1101 00:23:00.892624 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.892643 kubelet[2776]: W1101 00:23:00.892636 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.892714 kubelet[2776]: E1101 00:23:00.892653 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:00.892940 kubelet[2776]: E1101 00:23:00.892907 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.892940 kubelet[2776]: W1101 00:23:00.892920 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.893017 kubelet[2776]: E1101 00:23:00.892975 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:00.893287 kubelet[2776]: E1101 00:23:00.893267 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.893287 kubelet[2776]: W1101 00:23:00.893282 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.893358 kubelet[2776]: E1101 00:23:00.893300 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:00.893616 kubelet[2776]: E1101 00:23:00.893588 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.893616 kubelet[2776]: W1101 00:23:00.893609 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.893697 kubelet[2776]: E1101 00:23:00.893631 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:00.893893 kubelet[2776]: E1101 00:23:00.893871 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.893893 kubelet[2776]: W1101 00:23:00.893886 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.894024 kubelet[2776]: E1101 00:23:00.893904 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 1 00:23:00.894180 kubelet[2776]: E1101 00:23:00.894160 2776 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 1 00:23:00.894180 kubelet[2776]: W1101 00:23:00.894176 2776 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 1 00:23:00.894257 kubelet[2776]: E1101 00:23:00.894189 2776 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 1 00:23:01.670135 containerd[1598]: time="2025-11-01T00:23:01.670064628Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:23:01.681510 containerd[1598]: time="2025-11-01T00:23:01.681472381Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Nov 1 00:23:01.722653 containerd[1598]: time="2025-11-01T00:23:01.722589746Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:23:01.750587 kubelet[2776]: E1101 00:23:01.750503 2776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bwpmn" podUID="9cb8b2d7-16ad-4489-b82c-4e442c6904d5" Nov 1 00:23:01.775035 containerd[1598]: time="2025-11-01T00:23:01.774918855Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:23:01.775969 containerd[1598]: time="2025-11-01T00:23:01.775886493Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 2.157217712s" Nov 1 00:23:01.776031 containerd[1598]: time="2025-11-01T00:23:01.775973466Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 1 00:23:01.779248 containerd[1598]: time="2025-11-01T00:23:01.779178679Z" level=info msg="CreateContainer within sandbox \"b7f4b8fe38b33c5cb55ffc4fa3ef303cdf9dcd4af3f4b1b3a6cd2b946400d6d7\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 1 00:23:01.974966 containerd[1598]: time="2025-11-01T00:23:01.974782281Z" level=info msg="Container 265f129879db0c94bf27e5e9dc08a7c13e97c5169fe7b8218c01e415fb0d5f9f: CDI devices from CRI Config.CDIDevices: []" Nov 1 00:23:02.150396 containerd[1598]: time="2025-11-01T00:23:02.150329359Z" level=info msg="CreateContainer within sandbox \"b7f4b8fe38b33c5cb55ffc4fa3ef303cdf9dcd4af3f4b1b3a6cd2b946400d6d7\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"265f129879db0c94bf27e5e9dc08a7c13e97c5169fe7b8218c01e415fb0d5f9f\"" Nov 1 00:23:02.151223 containerd[1598]: time="2025-11-01T00:23:02.151149790Z" level=info msg="StartContainer for \"265f129879db0c94bf27e5e9dc08a7c13e97c5169fe7b8218c01e415fb0d5f9f\"" Nov 1 00:23:02.152903 containerd[1598]: time="2025-11-01T00:23:02.152863870Z" level=info msg="connecting to shim 265f129879db0c94bf27e5e9dc08a7c13e97c5169fe7b8218c01e415fb0d5f9f" address="unix:///run/containerd/s/76750c96fb5f231c51ec3fe1d59468a16a3e71b424b73caaa8960bd73b68cb14" protocol=ttrpc version=3 Nov 1 00:23:02.179320 systemd[1]: Started cri-containerd-265f129879db0c94bf27e5e9dc08a7c13e97c5169fe7b8218c01e415fb0d5f9f.scope - libcontainer container 265f129879db0c94bf27e5e9dc08a7c13e97c5169fe7b8218c01e415fb0d5f9f. Nov 1 00:23:02.252876 systemd[1]: cri-containerd-265f129879db0c94bf27e5e9dc08a7c13e97c5169fe7b8218c01e415fb0d5f9f.scope: Deactivated successfully. 
Nov 1 00:23:02.254175 systemd[1]: cri-containerd-265f129879db0c94bf27e5e9dc08a7c13e97c5169fe7b8218c01e415fb0d5f9f.scope: Consumed 48ms CPU time, 6.4M memory peak, 4.6M written to disk. Nov 1 00:23:02.256776 containerd[1598]: time="2025-11-01T00:23:02.256727313Z" level=info msg="TaskExit event in podsandbox handler container_id:\"265f129879db0c94bf27e5e9dc08a7c13e97c5169fe7b8218c01e415fb0d5f9f\" id:\"265f129879db0c94bf27e5e9dc08a7c13e97c5169fe7b8218c01e415fb0d5f9f\" pid:3563 exited_at:{seconds:1761956582 nanos:256170477}" Nov 1 00:23:02.434493 containerd[1598]: time="2025-11-01T00:23:02.434400387Z" level=info msg="received exit event container_id:\"265f129879db0c94bf27e5e9dc08a7c13e97c5169fe7b8218c01e415fb0d5f9f\" id:\"265f129879db0c94bf27e5e9dc08a7c13e97c5169fe7b8218c01e415fb0d5f9f\" pid:3563 exited_at:{seconds:1761956582 nanos:256170477}" Nov 1 00:23:02.445887 containerd[1598]: time="2025-11-01T00:23:02.445837895Z" level=info msg="StartContainer for \"265f129879db0c94bf27e5e9dc08a7c13e97c5169fe7b8218c01e415fb0d5f9f\" returns successfully" Nov 1 00:23:02.463550 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-265f129879db0c94bf27e5e9dc08a7c13e97c5169fe7b8218c01e415fb0d5f9f-rootfs.mount: Deactivated successfully. 
Nov 1 00:23:02.870424 kubelet[2776]: E1101 00:23:02.870380 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:02.871956 containerd[1598]: time="2025-11-01T00:23:02.871880048Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 1 00:23:03.750225 kubelet[2776]: E1101 00:23:03.750132 2776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bwpmn" podUID="9cb8b2d7-16ad-4489-b82c-4e442c6904d5" Nov 1 00:23:05.547504 containerd[1598]: time="2025-11-01T00:23:05.547431314Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:23:05.548477 containerd[1598]: time="2025-11-01T00:23:05.548413158Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Nov 1 00:23:05.549674 containerd[1598]: time="2025-11-01T00:23:05.549633179Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:23:05.552341 containerd[1598]: time="2025-11-01T00:23:05.552242410Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:23:05.552835 containerd[1598]: time="2025-11-01T00:23:05.552788295Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 2.68081904s" Nov 1 00:23:05.552835 containerd[1598]: time="2025-11-01T00:23:05.552822540Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Nov 1 00:23:05.555891 containerd[1598]: time="2025-11-01T00:23:05.555847190Z" level=info msg="CreateContainer within sandbox \"b7f4b8fe38b33c5cb55ffc4fa3ef303cdf9dcd4af3f4b1b3a6cd2b946400d6d7\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 1 00:23:05.568654 containerd[1598]: time="2025-11-01T00:23:05.568582668Z" level=info msg="Container 8be88deaa5a3231585b28306db4ad3d9dadcad789cc0c4ee7339f9f47529f9e9: CDI devices from CRI Config.CDIDevices: []" Nov 1 00:23:05.581528 containerd[1598]: time="2025-11-01T00:23:05.581470072Z" level=info msg="CreateContainer within sandbox \"b7f4b8fe38b33c5cb55ffc4fa3ef303cdf9dcd4af3f4b1b3a6cd2b946400d6d7\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8be88deaa5a3231585b28306db4ad3d9dadcad789cc0c4ee7339f9f47529f9e9\"" Nov 1 00:23:05.582260 containerd[1598]: time="2025-11-01T00:23:05.582183642Z" level=info msg="StartContainer for \"8be88deaa5a3231585b28306db4ad3d9dadcad789cc0c4ee7339f9f47529f9e9\"" Nov 1 00:23:05.584172 containerd[1598]: time="2025-11-01T00:23:05.584132081Z" level=info msg="connecting to shim 8be88deaa5a3231585b28306db4ad3d9dadcad789cc0c4ee7339f9f47529f9e9" address="unix:///run/containerd/s/76750c96fb5f231c51ec3fe1d59468a16a3e71b424b73caaa8960bd73b68cb14" protocol=ttrpc version=3 Nov 1 00:23:05.614107 systemd[1]: Started cri-containerd-8be88deaa5a3231585b28306db4ad3d9dadcad789cc0c4ee7339f9f47529f9e9.scope - libcontainer container 8be88deaa5a3231585b28306db4ad3d9dadcad789cc0c4ee7339f9f47529f9e9. 
Nov 1 00:23:05.663172 containerd[1598]: time="2025-11-01T00:23:05.663102623Z" level=info msg="StartContainer for \"8be88deaa5a3231585b28306db4ad3d9dadcad789cc0c4ee7339f9f47529f9e9\" returns successfully" Nov 1 00:23:05.750561 kubelet[2776]: E1101 00:23:05.750217 2776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bwpmn" podUID="9cb8b2d7-16ad-4489-b82c-4e442c6904d5" Nov 1 00:23:05.880306 kubelet[2776]: E1101 00:23:05.880139 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:06.882094 kubelet[2776]: E1101 00:23:06.882058 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:07.750601 kubelet[2776]: E1101 00:23:07.750525 2776 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bwpmn" podUID="9cb8b2d7-16ad-4489-b82c-4e442c6904d5" Nov 1 00:23:07.789258 systemd[1]: cri-containerd-8be88deaa5a3231585b28306db4ad3d9dadcad789cc0c4ee7339f9f47529f9e9.scope: Deactivated successfully. 
Nov 1 00:23:07.790769 containerd[1598]: time="2025-11-01T00:23:07.789983072Z" level=info msg="received exit event container_id:\"8be88deaa5a3231585b28306db4ad3d9dadcad789cc0c4ee7339f9f47529f9e9\" id:\"8be88deaa5a3231585b28306db4ad3d9dadcad789cc0c4ee7339f9f47529f9e9\" pid:3622 exited_at:{seconds:1761956587 nanos:789738081}" Nov 1 00:23:07.790769 containerd[1598]: time="2025-11-01T00:23:07.790243361Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8be88deaa5a3231585b28306db4ad3d9dadcad789cc0c4ee7339f9f47529f9e9\" id:\"8be88deaa5a3231585b28306db4ad3d9dadcad789cc0c4ee7339f9f47529f9e9\" pid:3622 exited_at:{seconds:1761956587 nanos:789738081}" Nov 1 00:23:07.789653 systemd[1]: cri-containerd-8be88deaa5a3231585b28306db4ad3d9dadcad789cc0c4ee7339f9f47529f9e9.scope: Consumed 644ms CPU time, 176.4M memory peak, 3.4M read from disk, 171.3M written to disk. Nov 1 00:23:07.819409 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8be88deaa5a3231585b28306db4ad3d9dadcad789cc0c4ee7339f9f47529f9e9-rootfs.mount: Deactivated successfully. Nov 1 00:23:07.888798 kubelet[2776]: I1101 00:23:07.888766 2776 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 1 00:23:08.201765 systemd[1]: Created slice kubepods-burstable-podbdbd4b4d_c838_47d2_b2da_3a95b5735d83.slice - libcontainer container kubepods-burstable-podbdbd4b4d_c838_47d2_b2da_3a95b5735d83.slice. Nov 1 00:23:08.215821 systemd[1]: Created slice kubepods-besteffort-podea34f150_dd20_4f23_a1df_b723d0fd4094.slice - libcontainer container kubepods-besteffort-podea34f150_dd20_4f23_a1df_b723d0fd4094.slice. Nov 1 00:23:08.226471 systemd[1]: Created slice kubepods-besteffort-podf1b660e9_a196_4e94_8db8_ec0d5d3642c8.slice - libcontainer container kubepods-besteffort-podf1b660e9_a196_4e94_8db8_ec0d5d3642c8.slice. 
Nov 1 00:23:08.234181 systemd[1]: Created slice kubepods-burstable-pod34075b49_4ccc_4510_a747_480fc74d94d8.slice - libcontainer container kubepods-burstable-pod34075b49_4ccc_4510_a747_480fc74d94d8.slice. Nov 1 00:23:08.234389 kubelet[2776]: I1101 00:23:08.234346 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpht2\" (UniqueName: \"kubernetes.io/projected/80ec35e2-7ac0-4d9e-82fe-2398651b9031-kube-api-access-gpht2\") pod \"goldmane-666569f655-9jkmd\" (UID: \"80ec35e2-7ac0-4d9e-82fe-2398651b9031\") " pod="calico-system/goldmane-666569f655-9jkmd" Nov 1 00:23:08.234530 kubelet[2776]: I1101 00:23:08.234392 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0bca4d2d-6dfb-4f38-ab3b-dd64e533f1bf-calico-apiserver-certs\") pod \"calico-apiserver-65dff998bf-bf7v4\" (UID: \"0bca4d2d-6dfb-4f38-ab3b-dd64e533f1bf\") " pod="calico-apiserver/calico-apiserver-65dff998bf-bf7v4" Nov 1 00:23:08.234530 kubelet[2776]: I1101 00:23:08.234416 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8e740e5a-3e3f-487f-be71-f50848ddb11c-calico-apiserver-certs\") pod \"calico-apiserver-65dff998bf-kplcg\" (UID: \"8e740e5a-3e3f-487f-be71-f50848ddb11c\") " pod="calico-apiserver/calico-apiserver-65dff998bf-kplcg" Nov 1 00:23:08.234530 kubelet[2776]: I1101 00:23:08.234437 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gq2x\" (UniqueName: \"kubernetes.io/projected/f1b660e9-a196-4e94-8db8-ec0d5d3642c8-kube-api-access-2gq2x\") pod \"calico-kube-controllers-7db858884d-rlxtg\" (UID: \"f1b660e9-a196-4e94-8db8-ec0d5d3642c8\") " pod="calico-system/calico-kube-controllers-7db858884d-rlxtg" Nov 1 00:23:08.234530 kubelet[2776]: I1101 00:23:08.234460 2776 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80ec35e2-7ac0-4d9e-82fe-2398651b9031-config\") pod \"goldmane-666569f655-9jkmd\" (UID: \"80ec35e2-7ac0-4d9e-82fe-2398651b9031\") " pod="calico-system/goldmane-666569f655-9jkmd" Nov 1 00:23:08.234530 kubelet[2776]: I1101 00:23:08.234480 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ea34f150-dd20-4f23-a1df-b723d0fd4094-whisker-backend-key-pair\") pod \"whisker-7fd75ddc46-x7sjf\" (UID: \"ea34f150-dd20-4f23-a1df-b723d0fd4094\") " pod="calico-system/whisker-7fd75ddc46-x7sjf" Nov 1 00:23:08.234667 kubelet[2776]: I1101 00:23:08.234498 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/34075b49-4ccc-4510-a747-480fc74d94d8-config-volume\") pod \"coredns-668d6bf9bc-s78vh\" (UID: \"34075b49-4ccc-4510-a747-480fc74d94d8\") " pod="kube-system/coredns-668d6bf9bc-s78vh" Nov 1 00:23:08.234667 kubelet[2776]: I1101 00:23:08.234516 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/80ec35e2-7ac0-4d9e-82fe-2398651b9031-goldmane-key-pair\") pod \"goldmane-666569f655-9jkmd\" (UID: \"80ec35e2-7ac0-4d9e-82fe-2398651b9031\") " pod="calico-system/goldmane-666569f655-9jkmd" Nov 1 00:23:08.234667 kubelet[2776]: I1101 00:23:08.234533 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bftnh\" (UniqueName: \"kubernetes.io/projected/ea34f150-dd20-4f23-a1df-b723d0fd4094-kube-api-access-bftnh\") pod \"whisker-7fd75ddc46-x7sjf\" (UID: \"ea34f150-dd20-4f23-a1df-b723d0fd4094\") " pod="calico-system/whisker-7fd75ddc46-x7sjf" Nov 1 00:23:08.234667 kubelet[2776]: I1101 
00:23:08.234555 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqmjf\" (UniqueName: \"kubernetes.io/projected/0bca4d2d-6dfb-4f38-ab3b-dd64e533f1bf-kube-api-access-jqmjf\") pod \"calico-apiserver-65dff998bf-bf7v4\" (UID: \"0bca4d2d-6dfb-4f38-ab3b-dd64e533f1bf\") " pod="calico-apiserver/calico-apiserver-65dff998bf-bf7v4" Nov 1 00:23:08.234667 kubelet[2776]: I1101 00:23:08.234573 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7thn\" (UniqueName: \"kubernetes.io/projected/34075b49-4ccc-4510-a747-480fc74d94d8-kube-api-access-x7thn\") pod \"coredns-668d6bf9bc-s78vh\" (UID: \"34075b49-4ccc-4510-a747-480fc74d94d8\") " pod="kube-system/coredns-668d6bf9bc-s78vh" Nov 1 00:23:08.234825 kubelet[2776]: I1101 00:23:08.234596 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea34f150-dd20-4f23-a1df-b723d0fd4094-whisker-ca-bundle\") pod \"whisker-7fd75ddc46-x7sjf\" (UID: \"ea34f150-dd20-4f23-a1df-b723d0fd4094\") " pod="calico-system/whisker-7fd75ddc46-x7sjf" Nov 1 00:23:08.234825 kubelet[2776]: I1101 00:23:08.234617 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1b660e9-a196-4e94-8db8-ec0d5d3642c8-tigera-ca-bundle\") pod \"calico-kube-controllers-7db858884d-rlxtg\" (UID: \"f1b660e9-a196-4e94-8db8-ec0d5d3642c8\") " pod="calico-system/calico-kube-controllers-7db858884d-rlxtg" Nov 1 00:23:08.234825 kubelet[2776]: I1101 00:23:08.234637 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80ec35e2-7ac0-4d9e-82fe-2398651b9031-goldmane-ca-bundle\") pod \"goldmane-666569f655-9jkmd\" (UID: 
\"80ec35e2-7ac0-4d9e-82fe-2398651b9031\") " pod="calico-system/goldmane-666569f655-9jkmd" Nov 1 00:23:08.234825 kubelet[2776]: I1101 00:23:08.234656 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7fq2\" (UniqueName: \"kubernetes.io/projected/bdbd4b4d-c838-47d2-b2da-3a95b5735d83-kube-api-access-r7fq2\") pod \"coredns-668d6bf9bc-kqpwg\" (UID: \"bdbd4b4d-c838-47d2-b2da-3a95b5735d83\") " pod="kube-system/coredns-668d6bf9bc-kqpwg" Nov 1 00:23:08.234825 kubelet[2776]: I1101 00:23:08.234687 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bdbd4b4d-c838-47d2-b2da-3a95b5735d83-config-volume\") pod \"coredns-668d6bf9bc-kqpwg\" (UID: \"bdbd4b4d-c838-47d2-b2da-3a95b5735d83\") " pod="kube-system/coredns-668d6bf9bc-kqpwg" Nov 1 00:23:08.234998 kubelet[2776]: I1101 00:23:08.234708 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mhhj\" (UniqueName: \"kubernetes.io/projected/8e740e5a-3e3f-487f-be71-f50848ddb11c-kube-api-access-5mhhj\") pod \"calico-apiserver-65dff998bf-kplcg\" (UID: \"8e740e5a-3e3f-487f-be71-f50848ddb11c\") " pod="calico-apiserver/calico-apiserver-65dff998bf-kplcg" Nov 1 00:23:08.243619 systemd[1]: Created slice kubepods-besteffort-pod0bca4d2d_6dfb_4f38_ab3b_dd64e533f1bf.slice - libcontainer container kubepods-besteffort-pod0bca4d2d_6dfb_4f38_ab3b_dd64e533f1bf.slice. Nov 1 00:23:08.249298 systemd[1]: Created slice kubepods-besteffort-pod80ec35e2_7ac0_4d9e_82fe_2398651b9031.slice - libcontainer container kubepods-besteffort-pod80ec35e2_7ac0_4d9e_82fe_2398651b9031.slice. Nov 1 00:23:08.256483 systemd[1]: Created slice kubepods-besteffort-pod8e740e5a_3e3f_487f_be71_f50848ddb11c.slice - libcontainer container kubepods-besteffort-pod8e740e5a_3e3f_487f_be71_f50848ddb11c.slice. 
Nov 1 00:23:08.515374 kubelet[2776]: E1101 00:23:08.515159 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:08.516506 containerd[1598]: time="2025-11-01T00:23:08.516435213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kqpwg,Uid:bdbd4b4d-c838-47d2-b2da-3a95b5735d83,Namespace:kube-system,Attempt:0,}" Nov 1 00:23:08.522189 containerd[1598]: time="2025-11-01T00:23:08.522144462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7fd75ddc46-x7sjf,Uid:ea34f150-dd20-4f23-a1df-b723d0fd4094,Namespace:calico-system,Attempt:0,}" Nov 1 00:23:08.531092 containerd[1598]: time="2025-11-01T00:23:08.531049242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7db858884d-rlxtg,Uid:f1b660e9-a196-4e94-8db8-ec0d5d3642c8,Namespace:calico-system,Attempt:0,}" Nov 1 00:23:08.537904 kubelet[2776]: E1101 00:23:08.537866 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:08.538338 containerd[1598]: time="2025-11-01T00:23:08.538298433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s78vh,Uid:34075b49-4ccc-4510-a747-480fc74d94d8,Namespace:kube-system,Attempt:0,}" Nov 1 00:23:08.547619 containerd[1598]: time="2025-11-01T00:23:08.547566124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65dff998bf-bf7v4,Uid:0bca4d2d-6dfb-4f38-ab3b-dd64e533f1bf,Namespace:calico-apiserver,Attempt:0,}" Nov 1 00:23:08.555775 containerd[1598]: time="2025-11-01T00:23:08.555721356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-9jkmd,Uid:80ec35e2-7ac0-4d9e-82fe-2398651b9031,Namespace:calico-system,Attempt:0,}" Nov 1 00:23:08.562591 containerd[1598]: 
time="2025-11-01T00:23:08.562533898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65dff998bf-kplcg,Uid:8e740e5a-3e3f-487f-be71-f50848ddb11c,Namespace:calico-apiserver,Attempt:0,}" Nov 1 00:23:08.896018 kubelet[2776]: E1101 00:23:08.893323 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:08.898187 containerd[1598]: time="2025-11-01T00:23:08.894669813Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 1 00:23:08.967620 containerd[1598]: time="2025-11-01T00:23:08.967432582Z" level=error msg="Failed to destroy network for sandbox \"f1a6d3ba9455dda00eca78cc9193eb64bd38e2ca091a1ad7a9ca324cf1695770\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:08.970021 systemd[1]: run-netns-cni\x2dfb830262\x2d99dd\x2d9c24\x2d2e30\x2d259fd0bb736d.mount: Deactivated successfully. Nov 1 00:23:09.078473 containerd[1598]: time="2025-11-01T00:23:09.078403559Z" level=error msg="Failed to destroy network for sandbox \"3d000c75fb0d08a5a40fa8ecfb8af71dab7a83c60fb07fc4b578604863b56b08\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:09.081382 systemd[1]: run-netns-cni\x2d5c88da21\x2d67f7\x2d8828\x2dd8df\x2d91e28573d3c9.mount: Deactivated successfully. 
Nov 1 00:23:09.144301 containerd[1598]: time="2025-11-01T00:23:09.144237991Z" level=error msg="Failed to destroy network for sandbox \"9feda099e79738dd964590e6e8a63c1f404f71898bf46b6e34e8eee56edd0d86\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:09.238518 containerd[1598]: time="2025-11-01T00:23:09.238376702Z" level=error msg="Failed to destroy network for sandbox \"1ad1bf2948682792591a51ce37978c1abb8fdb97faa6e79e19fff0181b0b50ce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:09.258080 containerd[1598]: time="2025-11-01T00:23:09.257496388Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kqpwg,Uid:bdbd4b4d-c838-47d2-b2da-3a95b5735d83,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1a6d3ba9455dda00eca78cc9193eb64bd38e2ca091a1ad7a9ca324cf1695770\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:09.258452 kubelet[2776]: E1101 00:23:09.258412 2776 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1a6d3ba9455dda00eca78cc9193eb64bd38e2ca091a1ad7a9ca324cf1695770\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:09.259701 containerd[1598]: time="2025-11-01T00:23:09.258532833Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-7fd75ddc46-x7sjf,Uid:ea34f150-dd20-4f23-a1df-b723d0fd4094,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d000c75fb0d08a5a40fa8ecfb8af71dab7a83c60fb07fc4b578604863b56b08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:09.259825 kubelet[2776]: E1101 00:23:09.258571 2776 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1a6d3ba9455dda00eca78cc9193eb64bd38e2ca091a1ad7a9ca324cf1695770\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-kqpwg" Nov 1 00:23:09.259825 kubelet[2776]: E1101 00:23:09.258602 2776 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1a6d3ba9455dda00eca78cc9193eb64bd38e2ca091a1ad7a9ca324cf1695770\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-kqpwg" Nov 1 00:23:09.259825 kubelet[2776]: E1101 00:23:09.258671 2776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-kqpwg_kube-system(bdbd4b4d-c838-47d2-b2da-3a95b5735d83)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-kqpwg_kube-system(bdbd4b4d-c838-47d2-b2da-3a95b5735d83)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f1a6d3ba9455dda00eca78cc9193eb64bd38e2ca091a1ad7a9ca324cf1695770\\\": plugin type=\\\"calico\\\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-kqpwg" podUID="bdbd4b4d-c838-47d2-b2da-3a95b5735d83" Nov 1 00:23:09.260008 kubelet[2776]: E1101 00:23:09.258842 2776 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d000c75fb0d08a5a40fa8ecfb8af71dab7a83c60fb07fc4b578604863b56b08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:09.260008 kubelet[2776]: E1101 00:23:09.258877 2776 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d000c75fb0d08a5a40fa8ecfb8af71dab7a83c60fb07fc4b578604863b56b08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7fd75ddc46-x7sjf" Nov 1 00:23:09.260008 kubelet[2776]: E1101 00:23:09.258905 2776 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d000c75fb0d08a5a40fa8ecfb8af71dab7a83c60fb07fc4b578604863b56b08\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7fd75ddc46-x7sjf" Nov 1 00:23:09.260088 kubelet[2776]: E1101 00:23:09.259517 2776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7fd75ddc46-x7sjf_calico-system(ea34f150-dd20-4f23-a1df-b723d0fd4094)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"whisker-7fd75ddc46-x7sjf_calico-system(ea34f150-dd20-4f23-a1df-b723d0fd4094)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3d000c75fb0d08a5a40fa8ecfb8af71dab7a83c60fb07fc4b578604863b56b08\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7fd75ddc46-x7sjf" podUID="ea34f150-dd20-4f23-a1df-b723d0fd4094" Nov 1 00:23:09.261482 containerd[1598]: time="2025-11-01T00:23:09.260330358Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7db858884d-rlxtg,Uid:f1b660e9-a196-4e94-8db8-ec0d5d3642c8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9feda099e79738dd964590e6e8a63c1f404f71898bf46b6e34e8eee56edd0d86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:09.261586 kubelet[2776]: E1101 00:23:09.260678 2776 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9feda099e79738dd964590e6e8a63c1f404f71898bf46b6e34e8eee56edd0d86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:09.261586 kubelet[2776]: E1101 00:23:09.260825 2776 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9feda099e79738dd964590e6e8a63c1f404f71898bf46b6e34e8eee56edd0d86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-7db858884d-rlxtg" Nov 1 00:23:09.261586 kubelet[2776]: E1101 00:23:09.260850 2776 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9feda099e79738dd964590e6e8a63c1f404f71898bf46b6e34e8eee56edd0d86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7db858884d-rlxtg" Nov 1 00:23:09.261718 kubelet[2776]: E1101 00:23:09.260898 2776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7db858884d-rlxtg_calico-system(f1b660e9-a196-4e94-8db8-ec0d5d3642c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7db858884d-rlxtg_calico-system(f1b660e9-a196-4e94-8db8-ec0d5d3642c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9feda099e79738dd964590e6e8a63c1f404f71898bf46b6e34e8eee56edd0d86\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7db858884d-rlxtg" podUID="f1b660e9-a196-4e94-8db8-ec0d5d3642c8" Nov 1 00:23:09.261785 containerd[1598]: time="2025-11-01T00:23:09.261690983Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s78vh,Uid:34075b49-4ccc-4510-a747-480fc74d94d8,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ad1bf2948682792591a51ce37978c1abb8fdb97faa6e79e19fff0181b0b50ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 
1 00:23:09.262180 kubelet[2776]: E1101 00:23:09.262086 2776 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ad1bf2948682792591a51ce37978c1abb8fdb97faa6e79e19fff0181b0b50ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:09.262334 kubelet[2776]: E1101 00:23:09.262229 2776 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ad1bf2948682792591a51ce37978c1abb8fdb97faa6e79e19fff0181b0b50ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-s78vh" Nov 1 00:23:09.262334 kubelet[2776]: E1101 00:23:09.262270 2776 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ad1bf2948682792591a51ce37978c1abb8fdb97faa6e79e19fff0181b0b50ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-s78vh" Nov 1 00:23:09.262410 kubelet[2776]: E1101 00:23:09.262361 2776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-s78vh_kube-system(34075b49-4ccc-4510-a747-480fc74d94d8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-s78vh_kube-system(34075b49-4ccc-4510-a747-480fc74d94d8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1ad1bf2948682792591a51ce37978c1abb8fdb97faa6e79e19fff0181b0b50ce\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-s78vh" podUID="34075b49-4ccc-4510-a747-480fc74d94d8" Nov 1 00:23:09.315070 containerd[1598]: time="2025-11-01T00:23:09.315006861Z" level=error msg="Failed to destroy network for sandbox \"d88830bfa30936136234a6d57f68a89b76595caac5e56cbafde435d40b93277d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:09.316949 containerd[1598]: time="2025-11-01T00:23:09.316840574Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-9jkmd,Uid:80ec35e2-7ac0-4d9e-82fe-2398651b9031,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d88830bfa30936136234a6d57f68a89b76595caac5e56cbafde435d40b93277d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:09.317725 kubelet[2776]: E1101 00:23:09.317233 2776 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d88830bfa30936136234a6d57f68a89b76595caac5e56cbafde435d40b93277d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:09.317725 kubelet[2776]: E1101 00:23:09.317302 2776 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d88830bfa30936136234a6d57f68a89b76595caac5e56cbafde435d40b93277d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-9jkmd" Nov 1 00:23:09.317725 kubelet[2776]: E1101 00:23:09.317334 2776 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d88830bfa30936136234a6d57f68a89b76595caac5e56cbafde435d40b93277d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-9jkmd" Nov 1 00:23:09.317989 kubelet[2776]: E1101 00:23:09.317382 2776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-9jkmd_calico-system(80ec35e2-7ac0-4d9e-82fe-2398651b9031)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-9jkmd_calico-system(80ec35e2-7ac0-4d9e-82fe-2398651b9031)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d88830bfa30936136234a6d57f68a89b76595caac5e56cbafde435d40b93277d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-9jkmd" podUID="80ec35e2-7ac0-4d9e-82fe-2398651b9031" Nov 1 00:23:09.320552 containerd[1598]: time="2025-11-01T00:23:09.320504993Z" level=error msg="Failed to destroy network for sandbox \"4c89c1a6e7c472cad95c48e8c68d3e67de53fcc20a9c32a99f385a70d282f491\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:09.322574 containerd[1598]: time="2025-11-01T00:23:09.322529515Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-65dff998bf-kplcg,Uid:8e740e5a-3e3f-487f-be71-f50848ddb11c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c89c1a6e7c472cad95c48e8c68d3e67de53fcc20a9c32a99f385a70d282f491\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:09.322887 kubelet[2776]: E1101 00:23:09.322843 2776 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c89c1a6e7c472cad95c48e8c68d3e67de53fcc20a9c32a99f385a70d282f491\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:09.323181 kubelet[2776]: E1101 00:23:09.322911 2776 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c89c1a6e7c472cad95c48e8c68d3e67de53fcc20a9c32a99f385a70d282f491\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65dff998bf-kplcg" Nov 1 00:23:09.323181 kubelet[2776]: E1101 00:23:09.322958 2776 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c89c1a6e7c472cad95c48e8c68d3e67de53fcc20a9c32a99f385a70d282f491\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65dff998bf-kplcg" Nov 1 00:23:09.323181 kubelet[2776]: E1101 00:23:09.323033 2776 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-65dff998bf-kplcg_calico-apiserver(8e740e5a-3e3f-487f-be71-f50848ddb11c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-65dff998bf-kplcg_calico-apiserver(8e740e5a-3e3f-487f-be71-f50848ddb11c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4c89c1a6e7c472cad95c48e8c68d3e67de53fcc20a9c32a99f385a70d282f491\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65dff998bf-kplcg" podUID="8e740e5a-3e3f-487f-be71-f50848ddb11c" Nov 1 00:23:09.326096 containerd[1598]: time="2025-11-01T00:23:09.326059011Z" level=error msg="Failed to destroy network for sandbox \"6b4917477e5bddf8e73e2601c9c6bb36c800c13c9daa1d72d69affc3c81754ba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:09.327710 containerd[1598]: time="2025-11-01T00:23:09.327660577Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65dff998bf-bf7v4,Uid:0bca4d2d-6dfb-4f38-ab3b-dd64e533f1bf,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b4917477e5bddf8e73e2601c9c6bb36c800c13c9daa1d72d69affc3c81754ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:09.327924 kubelet[2776]: E1101 00:23:09.327877 2776 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"6b4917477e5bddf8e73e2601c9c6bb36c800c13c9daa1d72d69affc3c81754ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:09.328048 kubelet[2776]: E1101 00:23:09.327967 2776 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b4917477e5bddf8e73e2601c9c6bb36c800c13c9daa1d72d69affc3c81754ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65dff998bf-bf7v4" Nov 1 00:23:09.328048 kubelet[2776]: E1101 00:23:09.327992 2776 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b4917477e5bddf8e73e2601c9c6bb36c800c13c9daa1d72d69affc3c81754ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65dff998bf-bf7v4" Nov 1 00:23:09.328164 kubelet[2776]: E1101 00:23:09.328058 2776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-65dff998bf-bf7v4_calico-apiserver(0bca4d2d-6dfb-4f38-ab3b-dd64e533f1bf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-65dff998bf-bf7v4_calico-apiserver(0bca4d2d-6dfb-4f38-ab3b-dd64e533f1bf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6b4917477e5bddf8e73e2601c9c6bb36c800c13c9daa1d72d69affc3c81754ba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-65dff998bf-bf7v4" podUID="0bca4d2d-6dfb-4f38-ab3b-dd64e533f1bf" Nov 1 00:23:09.758449 systemd[1]: Created slice kubepods-besteffort-pod9cb8b2d7_16ad_4489_b82c_4e442c6904d5.slice - libcontainer container kubepods-besteffort-pod9cb8b2d7_16ad_4489_b82c_4e442c6904d5.slice. Nov 1 00:23:09.761098 containerd[1598]: time="2025-11-01T00:23:09.761057269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bwpmn,Uid:9cb8b2d7-16ad-4489-b82c-4e442c6904d5,Namespace:calico-system,Attempt:0,}" Nov 1 00:23:09.812919 containerd[1598]: time="2025-11-01T00:23:09.812837634Z" level=error msg="Failed to destroy network for sandbox \"06f77f3c63d23e344d5acc7d341b7ddd9c65455cd7770c23ec46650d022b49be\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:09.814388 containerd[1598]: time="2025-11-01T00:23:09.814350435Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bwpmn,Uid:9cb8b2d7-16ad-4489-b82c-4e442c6904d5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"06f77f3c63d23e344d5acc7d341b7ddd9c65455cd7770c23ec46650d022b49be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:09.814686 kubelet[2776]: E1101 00:23:09.814623 2776 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06f77f3c63d23e344d5acc7d341b7ddd9c65455cd7770c23ec46650d022b49be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:09.814749 kubelet[2776]: E1101 
00:23:09.814719 2776 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06f77f3c63d23e344d5acc7d341b7ddd9c65455cd7770c23ec46650d022b49be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bwpmn" Nov 1 00:23:09.814779 kubelet[2776]: E1101 00:23:09.814749 2776 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06f77f3c63d23e344d5acc7d341b7ddd9c65455cd7770c23ec46650d022b49be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bwpmn" Nov 1 00:23:09.814870 kubelet[2776]: E1101 00:23:09.814813 2776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bwpmn_calico-system(9cb8b2d7-16ad-4489-b82c-4e442c6904d5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bwpmn_calico-system(9cb8b2d7-16ad-4489-b82c-4e442c6904d5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"06f77f3c63d23e344d5acc7d341b7ddd9c65455cd7770c23ec46650d022b49be\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bwpmn" podUID="9cb8b2d7-16ad-4489-b82c-4e442c6904d5" Nov 1 00:23:09.820500 systemd[1]: run-netns-cni\x2d68472626\x2da5ad\x2d43c7\x2d3331\x2deb4f2d4f0369.mount: Deactivated successfully. Nov 1 00:23:09.820616 systemd[1]: run-netns-cni\x2d7a955e43\x2d9e55\x2d8b34\x2d6c4d\x2d5e93aafefa89.mount: Deactivated successfully. 
Nov 1 00:23:09.820702 systemd[1]: run-netns-cni\x2da28cf821\x2dc964\x2d7ae4\x2d821a\x2d5fd01923ba4e.mount: Deactivated successfully. Nov 1 00:23:09.820770 systemd[1]: run-netns-cni\x2d86edc380\x2d792b\x2df6bf\x2d795b\x2d3b8dedf65472.mount: Deactivated successfully. Nov 1 00:23:09.820852 systemd[1]: run-netns-cni\x2d82123b03\x2d8a40\x2ddf5e\x2d5d6e\x2d5efeb1b177d1.mount: Deactivated successfully. Nov 1 00:23:15.531388 systemd[1]: Started sshd@9-10.0.0.116:22-10.0.0.1:49582.service - OpenSSH per-connection server daemon (10.0.0.1:49582). Nov 1 00:23:15.600344 sshd[3928]: Accepted publickey for core from 10.0.0.1 port 49582 ssh2: RSA SHA256:ejpXjL08eXwq5E+RKrHGlM9AwE1NxRVT+vpv8k52wss Nov 1 00:23:15.601880 sshd-session[3928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:23:15.606832 systemd-logind[1575]: New session 10 of user core. Nov 1 00:23:15.614068 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 1 00:23:15.832772 sshd[3931]: Connection closed by 10.0.0.1 port 49582 Nov 1 00:23:15.833247 sshd-session[3928]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:15.840145 systemd[1]: sshd@9-10.0.0.116:22-10.0.0.1:49582.service: Deactivated successfully. Nov 1 00:23:15.843727 systemd[1]: session-10.scope: Deactivated successfully. Nov 1 00:23:15.845053 systemd-logind[1575]: Session 10 logged out. Waiting for processes to exit. Nov 1 00:23:15.846671 systemd-logind[1575]: Removed session 10. Nov 1 00:23:18.599686 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1793552373.mount: Deactivated successfully. 
Nov 1 00:23:20.239210 containerd[1598]: time="2025-11-01T00:23:20.239120029Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:23:20.243812 containerd[1598]: time="2025-11-01T00:23:20.243724289Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 1 00:23:20.251639 containerd[1598]: time="2025-11-01T00:23:20.251510519Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:23:20.270852 containerd[1598]: time="2025-11-01T00:23:20.270651786Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 1 00:23:20.271894 containerd[1598]: time="2025-11-01T00:23:20.271812784Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 11.377093599s" Nov 1 00:23:20.271894 containerd[1598]: time="2025-11-01T00:23:20.271874309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 1 00:23:20.298610 containerd[1598]: time="2025-11-01T00:23:20.298544543Z" level=info msg="CreateContainer within sandbox \"b7f4b8fe38b33c5cb55ffc4fa3ef303cdf9dcd4af3f4b1b3a6cd2b946400d6d7\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 1 00:23:20.327418 containerd[1598]: time="2025-11-01T00:23:20.327322231Z" level=info msg="Container 
6a1f9f9904bfb5fa7441f7374ddf19cdd14a9f395c7178d20e6d4dcf6740d858: CDI devices from CRI Config.CDIDevices: []" Nov 1 00:23:20.345612 containerd[1598]: time="2025-11-01T00:23:20.345390094Z" level=info msg="CreateContainer within sandbox \"b7f4b8fe38b33c5cb55ffc4fa3ef303cdf9dcd4af3f4b1b3a6cd2b946400d6d7\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6a1f9f9904bfb5fa7441f7374ddf19cdd14a9f395c7178d20e6d4dcf6740d858\"" Nov 1 00:23:20.363016 containerd[1598]: time="2025-11-01T00:23:20.362924705Z" level=info msg="StartContainer for \"6a1f9f9904bfb5fa7441f7374ddf19cdd14a9f395c7178d20e6d4dcf6740d858\"" Nov 1 00:23:20.365810 containerd[1598]: time="2025-11-01T00:23:20.365744897Z" level=info msg="connecting to shim 6a1f9f9904bfb5fa7441f7374ddf19cdd14a9f395c7178d20e6d4dcf6740d858" address="unix:///run/containerd/s/76750c96fb5f231c51ec3fe1d59468a16a3e71b424b73caaa8960bd73b68cb14" protocol=ttrpc version=3 Nov 1 00:23:20.422359 systemd[1]: Started cri-containerd-6a1f9f9904bfb5fa7441f7374ddf19cdd14a9f395c7178d20e6d4dcf6740d858.scope - libcontainer container 6a1f9f9904bfb5fa7441f7374ddf19cdd14a9f395c7178d20e6d4dcf6740d858. Nov 1 00:23:20.842703 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 1 00:23:20.844125 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 1 00:23:20.853646 systemd[1]: Started sshd@10-10.0.0.116:22-10.0.0.1:47894.service - OpenSSH per-connection server daemon (10.0.0.1:47894). Nov 1 00:23:20.949631 sshd[3996]: Accepted publickey for core from 10.0.0.1 port 47894 ssh2: RSA SHA256:ejpXjL08eXwq5E+RKrHGlM9AwE1NxRVT+vpv8k52wss Nov 1 00:23:20.952252 sshd-session[3996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:23:20.960488 systemd-logind[1575]: New session 11 of user core. Nov 1 00:23:20.972422 systemd[1]: Started session-11.scope - Session 11 of User core. 
Nov 1 00:23:21.063732 kubelet[2776]: E1101 00:23:21.062866 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:21.064247 containerd[1598]: time="2025-11-01T00:23:21.063521532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kqpwg,Uid:bdbd4b4d-c838-47d2-b2da-3a95b5735d83,Namespace:kube-system,Attempt:0,}" Nov 1 00:23:21.069254 containerd[1598]: time="2025-11-01T00:23:21.069102002Z" level=info msg="StartContainer for \"6a1f9f9904bfb5fa7441f7374ddf19cdd14a9f395c7178d20e6d4dcf6740d858\" returns successfully" Nov 1 00:23:21.072144 containerd[1598]: time="2025-11-01T00:23:21.072093445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7db858884d-rlxtg,Uid:f1b660e9-a196-4e94-8db8-ec0d5d3642c8,Namespace:calico-system,Attempt:0,}" Nov 1 00:23:21.224090 sshd[3999]: Connection closed by 10.0.0.1 port 47894 Nov 1 00:23:21.225119 sshd-session[3996]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:21.232231 systemd[1]: sshd@10-10.0.0.116:22-10.0.0.1:47894.service: Deactivated successfully. Nov 1 00:23:21.234912 systemd[1]: session-11.scope: Deactivated successfully. Nov 1 00:23:21.237262 systemd-logind[1575]: Session 11 logged out. Waiting for processes to exit. Nov 1 00:23:21.239819 systemd-logind[1575]: Removed session 11. 
Nov 1 00:23:21.531296 kubelet[2776]: I1101 00:23:21.531106 2776 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ea34f150-dd20-4f23-a1df-b723d0fd4094-whisker-backend-key-pair\") pod \"ea34f150-dd20-4f23-a1df-b723d0fd4094\" (UID: \"ea34f150-dd20-4f23-a1df-b723d0fd4094\") " Nov 1 00:23:21.531296 kubelet[2776]: I1101 00:23:21.531170 2776 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea34f150-dd20-4f23-a1df-b723d0fd4094-whisker-ca-bundle\") pod \"ea34f150-dd20-4f23-a1df-b723d0fd4094\" (UID: \"ea34f150-dd20-4f23-a1df-b723d0fd4094\") " Nov 1 00:23:21.531296 kubelet[2776]: I1101 00:23:21.531197 2776 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bftnh\" (UniqueName: \"kubernetes.io/projected/ea34f150-dd20-4f23-a1df-b723d0fd4094-kube-api-access-bftnh\") pod \"ea34f150-dd20-4f23-a1df-b723d0fd4094\" (UID: \"ea34f150-dd20-4f23-a1df-b723d0fd4094\") " Nov 1 00:23:21.532293 kubelet[2776]: I1101 00:23:21.532223 2776 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea34f150-dd20-4f23-a1df-b723d0fd4094-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "ea34f150-dd20-4f23-a1df-b723d0fd4094" (UID: "ea34f150-dd20-4f23-a1df-b723d0fd4094"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 1 00:23:21.538050 kubelet[2776]: I1101 00:23:21.537736 2776 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea34f150-dd20-4f23-a1df-b723d0fd4094-kube-api-access-bftnh" (OuterVolumeSpecName: "kube-api-access-bftnh") pod "ea34f150-dd20-4f23-a1df-b723d0fd4094" (UID: "ea34f150-dd20-4f23-a1df-b723d0fd4094"). InnerVolumeSpecName "kube-api-access-bftnh". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 1 00:23:21.538461 kubelet[2776]: I1101 00:23:21.538292 2776 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea34f150-dd20-4f23-a1df-b723d0fd4094-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "ea34f150-dd20-4f23-a1df-b723d0fd4094" (UID: "ea34f150-dd20-4f23-a1df-b723d0fd4094"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 1 00:23:21.539285 systemd[1]: var-lib-kubelet-pods-ea34f150\x2ddd20\x2d4f23\x2da1df\x2db723d0fd4094-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbftnh.mount: Deactivated successfully. Nov 1 00:23:21.539478 systemd[1]: var-lib-kubelet-pods-ea34f150\x2ddd20\x2d4f23\x2da1df\x2db723d0fd4094-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 1 00:23:21.632164 kubelet[2776]: I1101 00:23:21.632112 2776 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ea34f150-dd20-4f23-a1df-b723d0fd4094-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 1 00:23:21.632164 kubelet[2776]: I1101 00:23:21.632160 2776 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea34f150-dd20-4f23-a1df-b723d0fd4094-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 1 00:23:21.632164 kubelet[2776]: I1101 00:23:21.632173 2776 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bftnh\" (UniqueName: \"kubernetes.io/projected/ea34f150-dd20-4f23-a1df-b723d0fd4094-kube-api-access-bftnh\") on node \"localhost\" DevicePath \"\"" Nov 1 00:23:21.640316 containerd[1598]: 2025-11-01 00:23:21.505 [INFO][4061] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3ec4c6755192eb034a7411738831ec0f5b27c761c73c3a20b152c99f2a68697e" Nov 1 00:23:21.640316 
containerd[1598]: 2025-11-01 00:23:21.505 [INFO][4061] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3ec4c6755192eb034a7411738831ec0f5b27c761c73c3a20b152c99f2a68697e" iface="eth0" netns="/var/run/netns/cni-4025428e-df77-3a0a-ddaf-d1782199c603" Nov 1 00:23:21.640316 containerd[1598]: 2025-11-01 00:23:21.506 [INFO][4061] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3ec4c6755192eb034a7411738831ec0f5b27c761c73c3a20b152c99f2a68697e" iface="eth0" netns="/var/run/netns/cni-4025428e-df77-3a0a-ddaf-d1782199c603" Nov 1 00:23:21.640316 containerd[1598]: 2025-11-01 00:23:21.507 [INFO][4061] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3ec4c6755192eb034a7411738831ec0f5b27c761c73c3a20b152c99f2a68697e" iface="eth0" netns="/var/run/netns/cni-4025428e-df77-3a0a-ddaf-d1782199c603" Nov 1 00:23:21.640316 containerd[1598]: 2025-11-01 00:23:21.507 [INFO][4061] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3ec4c6755192eb034a7411738831ec0f5b27c761c73c3a20b152c99f2a68697e" Nov 1 00:23:21.640316 containerd[1598]: 2025-11-01 00:23:21.508 [INFO][4061] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3ec4c6755192eb034a7411738831ec0f5b27c761c73c3a20b152c99f2a68697e" Nov 1 00:23:21.640316 containerd[1598]: 2025-11-01 00:23:21.599 [INFO][4089] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3ec4c6755192eb034a7411738831ec0f5b27c761c73c3a20b152c99f2a68697e" HandleID="k8s-pod-network.3ec4c6755192eb034a7411738831ec0f5b27c761c73c3a20b152c99f2a68697e" Workload="localhost-k8s-calico--kube--controllers--7db858884d--rlxtg-eth0" Nov 1 00:23:21.640316 containerd[1598]: 2025-11-01 00:23:21.600 [INFO][4089] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:21.640316 containerd[1598]: 2025-11-01 00:23:21.601 [INFO][4089] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:23:21.643700 containerd[1598]: 2025-11-01 00:23:21.618 [WARNING][4089] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="3ec4c6755192eb034a7411738831ec0f5b27c761c73c3a20b152c99f2a68697e" HandleID="k8s-pod-network.3ec4c6755192eb034a7411738831ec0f5b27c761c73c3a20b152c99f2a68697e" Workload="localhost-k8s-calico--kube--controllers--7db858884d--rlxtg-eth0" Nov 1 00:23:21.643700 containerd[1598]: 2025-11-01 00:23:21.618 [INFO][4089] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3ec4c6755192eb034a7411738831ec0f5b27c761c73c3a20b152c99f2a68697e" HandleID="k8s-pod-network.3ec4c6755192eb034a7411738831ec0f5b27c761c73c3a20b152c99f2a68697e" Workload="localhost-k8s-calico--kube--controllers--7db858884d--rlxtg-eth0" Nov 1 00:23:21.643700 containerd[1598]: 2025-11-01 00:23:21.626 [INFO][4089] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:21.643700 containerd[1598]: 2025-11-01 00:23:21.632 [INFO][4061] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3ec4c6755192eb034a7411738831ec0f5b27c761c73c3a20b152c99f2a68697e" Nov 1 00:23:21.645245 systemd[1]: run-netns-cni\x2d4025428e\x2ddf77\x2d3a0a\x2dddaf\x2dd1782199c603.mount: Deactivated successfully. Nov 1 00:23:21.653980 containerd[1598]: 2025-11-01 00:23:21.585 [INFO][4062] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c7b617c28a5300f9ecaa1b68fa2e30edc0b6ebe75426b91cf6efddbf061dadef" Nov 1 00:23:21.653980 containerd[1598]: 2025-11-01 00:23:21.586 [INFO][4062] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c7b617c28a5300f9ecaa1b68fa2e30edc0b6ebe75426b91cf6efddbf061dadef" iface="eth0" netns="/var/run/netns/cni-9ffc8685-9b60-ba3c-dbaa-367438e60633" Nov 1 00:23:21.653980 containerd[1598]: 2025-11-01 00:23:21.586 [INFO][4062] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="c7b617c28a5300f9ecaa1b68fa2e30edc0b6ebe75426b91cf6efddbf061dadef" iface="eth0" netns="/var/run/netns/cni-9ffc8685-9b60-ba3c-dbaa-367438e60633" Nov 1 00:23:21.653980 containerd[1598]: 2025-11-01 00:23:21.587 [INFO][4062] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c7b617c28a5300f9ecaa1b68fa2e30edc0b6ebe75426b91cf6efddbf061dadef" iface="eth0" netns="/var/run/netns/cni-9ffc8685-9b60-ba3c-dbaa-367438e60633" Nov 1 00:23:21.653980 containerd[1598]: 2025-11-01 00:23:21.587 [INFO][4062] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c7b617c28a5300f9ecaa1b68fa2e30edc0b6ebe75426b91cf6efddbf061dadef" Nov 1 00:23:21.653980 containerd[1598]: 2025-11-01 00:23:21.587 [INFO][4062] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c7b617c28a5300f9ecaa1b68fa2e30edc0b6ebe75426b91cf6efddbf061dadef" Nov 1 00:23:21.653980 containerd[1598]: 2025-11-01 00:23:21.622 [INFO][4098] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c7b617c28a5300f9ecaa1b68fa2e30edc0b6ebe75426b91cf6efddbf061dadef" HandleID="k8s-pod-network.c7b617c28a5300f9ecaa1b68fa2e30edc0b6ebe75426b91cf6efddbf061dadef" Workload="localhost-k8s-coredns--668d6bf9bc--kqpwg-eth0" Nov 1 00:23:21.653980 containerd[1598]: 2025-11-01 00:23:21.623 [INFO][4098] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:21.653980 containerd[1598]: 2025-11-01 00:23:21.626 [INFO][4098] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:21.654408 containerd[1598]: 2025-11-01 00:23:21.639 [WARNING][4098] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c7b617c28a5300f9ecaa1b68fa2e30edc0b6ebe75426b91cf6efddbf061dadef" HandleID="k8s-pod-network.c7b617c28a5300f9ecaa1b68fa2e30edc0b6ebe75426b91cf6efddbf061dadef" Workload="localhost-k8s-coredns--668d6bf9bc--kqpwg-eth0" Nov 1 00:23:21.654408 containerd[1598]: 2025-11-01 00:23:21.639 [INFO][4098] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c7b617c28a5300f9ecaa1b68fa2e30edc0b6ebe75426b91cf6efddbf061dadef" HandleID="k8s-pod-network.c7b617c28a5300f9ecaa1b68fa2e30edc0b6ebe75426b91cf6efddbf061dadef" Workload="localhost-k8s-coredns--668d6bf9bc--kqpwg-eth0" Nov 1 00:23:21.654408 containerd[1598]: 2025-11-01 00:23:21.641 [INFO][4098] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:21.654408 containerd[1598]: 2025-11-01 00:23:21.647 [INFO][4062] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c7b617c28a5300f9ecaa1b68fa2e30edc0b6ebe75426b91cf6efddbf061dadef" Nov 1 00:23:21.657159 systemd[1]: run-netns-cni\x2d9ffc8685\x2d9b60\x2dba3c\x2ddbaa\x2d367438e60633.mount: Deactivated successfully. 
Nov 1 00:23:21.687300 containerd[1598]: time="2025-11-01T00:23:21.687205764Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7db858884d-rlxtg,Uid:f1b660e9-a196-4e94-8db8-ec0d5d3642c8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ec4c6755192eb034a7411738831ec0f5b27c761c73c3a20b152c99f2a68697e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:21.687971 kubelet[2776]: E1101 00:23:21.687863 2776 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ec4c6755192eb034a7411738831ec0f5b27c761c73c3a20b152c99f2a68697e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:21.688162 kubelet[2776]: E1101 00:23:21.688017 2776 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ec4c6755192eb034a7411738831ec0f5b27c761c73c3a20b152c99f2a68697e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7db858884d-rlxtg" Nov 1 00:23:21.689613 containerd[1598]: time="2025-11-01T00:23:21.689552618Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kqpwg,Uid:bdbd4b4d-c838-47d2-b2da-3a95b5735d83,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7b617c28a5300f9ecaa1b68fa2e30edc0b6ebe75426b91cf6efddbf061dadef\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:21.689965 kubelet[2776]: E1101 00:23:21.689854 2776 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7b617c28a5300f9ecaa1b68fa2e30edc0b6ebe75426b91cf6efddbf061dadef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 1 00:23:21.690094 kubelet[2776]: E1101 00:23:21.689980 2776 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7b617c28a5300f9ecaa1b68fa2e30edc0b6ebe75426b91cf6efddbf061dadef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-kqpwg" Nov 1 00:23:21.690773 kubelet[2776]: E1101 00:23:21.690707 2776 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7b617c28a5300f9ecaa1b68fa2e30edc0b6ebe75426b91cf6efddbf061dadef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-kqpwg" Nov 1 00:23:21.690856 kubelet[2776]: E1101 00:23:21.690796 2776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-kqpwg_kube-system(bdbd4b4d-c838-47d2-b2da-3a95b5735d83)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-kqpwg_kube-system(bdbd4b4d-c838-47d2-b2da-3a95b5735d83)\\\": rpc error: code = Unknown desc = failed to setup network for 
sandbox \\\"c7b617c28a5300f9ecaa1b68fa2e30edc0b6ebe75426b91cf6efddbf061dadef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-kqpwg" podUID="bdbd4b4d-c838-47d2-b2da-3a95b5735d83" Nov 1 00:23:21.690856 kubelet[2776]: E1101 00:23:21.690701 2776 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ec4c6755192eb034a7411738831ec0f5b27c761c73c3a20b152c99f2a68697e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7db858884d-rlxtg" Nov 1 00:23:21.690997 kubelet[2776]: E1101 00:23:21.690901 2776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7db858884d-rlxtg_calico-system(f1b660e9-a196-4e94-8db8-ec0d5d3642c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7db858884d-rlxtg_calico-system(f1b660e9-a196-4e94-8db8-ec0d5d3642c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3ec4c6755192eb034a7411738831ec0f5b27c761c73c3a20b152c99f2a68697e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7db858884d-rlxtg" podUID="f1b660e9-a196-4e94-8db8-ec0d5d3642c8" Nov 1 00:23:21.751465 containerd[1598]: time="2025-11-01T00:23:21.751410994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bwpmn,Uid:9cb8b2d7-16ad-4489-b82c-4e442c6904d5,Namespace:calico-system,Attempt:0,}" Nov 1 00:23:22.000681 systemd-networkd[1518]: 
cali155678b7105: Link UP Nov 1 00:23:22.001558 systemd-networkd[1518]: cali155678b7105: Gained carrier Nov 1 00:23:22.046034 containerd[1598]: 2025-11-01 00:23:21.798 [INFO][4107] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:23:22.046034 containerd[1598]: 2025-11-01 00:23:21.844 [INFO][4107] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--bwpmn-eth0 csi-node-driver- calico-system 9cb8b2d7-16ad-4489-b82c-4e442c6904d5 769 0 2025-11-01 00:22:57 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-bwpmn eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali155678b7105 [] [] }} ContainerID="81dc6fb4b0604407b5656e6b1abda5cd3587d4e165e0a86fdfa56876fedd1d20" Namespace="calico-system" Pod="csi-node-driver-bwpmn" WorkloadEndpoint="localhost-k8s-csi--node--driver--bwpmn-" Nov 1 00:23:22.046034 containerd[1598]: 2025-11-01 00:23:21.844 [INFO][4107] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="81dc6fb4b0604407b5656e6b1abda5cd3587d4e165e0a86fdfa56876fedd1d20" Namespace="calico-system" Pod="csi-node-driver-bwpmn" WorkloadEndpoint="localhost-k8s-csi--node--driver--bwpmn-eth0" Nov 1 00:23:22.046034 containerd[1598]: 2025-11-01 00:23:21.886 [INFO][4122] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="81dc6fb4b0604407b5656e6b1abda5cd3587d4e165e0a86fdfa56876fedd1d20" HandleID="k8s-pod-network.81dc6fb4b0604407b5656e6b1abda5cd3587d4e165e0a86fdfa56876fedd1d20" Workload="localhost-k8s-csi--node--driver--bwpmn-eth0" Nov 1 00:23:22.046353 containerd[1598]: 2025-11-01 00:23:21.886 [INFO][4122] 
ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="81dc6fb4b0604407b5656e6b1abda5cd3587d4e165e0a86fdfa56876fedd1d20" HandleID="k8s-pod-network.81dc6fb4b0604407b5656e6b1abda5cd3587d4e165e0a86fdfa56876fedd1d20" Workload="localhost-k8s-csi--node--driver--bwpmn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fed0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-bwpmn", "timestamp":"2025-11-01 00:23:21.886480839 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:23:22.046353 containerd[1598]: 2025-11-01 00:23:21.886 [INFO][4122] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:22.046353 containerd[1598]: 2025-11-01 00:23:21.886 [INFO][4122] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:23:22.046353 containerd[1598]: 2025-11-01 00:23:21.886 [INFO][4122] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:23:22.046353 containerd[1598]: 2025-11-01 00:23:21.912 [INFO][4122] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.81dc6fb4b0604407b5656e6b1abda5cd3587d4e165e0a86fdfa56876fedd1d20" host="localhost" Nov 1 00:23:22.046353 containerd[1598]: 2025-11-01 00:23:21.926 [INFO][4122] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:23:22.046353 containerd[1598]: 2025-11-01 00:23:21.935 [INFO][4122] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:23:22.046353 containerd[1598]: 2025-11-01 00:23:21.938 [INFO][4122] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:23:22.046353 containerd[1598]: 2025-11-01 00:23:21.942 [INFO][4122] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:23:22.046353 containerd[1598]: 2025-11-01 00:23:21.942 [INFO][4122] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.81dc6fb4b0604407b5656e6b1abda5cd3587d4e165e0a86fdfa56876fedd1d20" host="localhost" Nov 1 00:23:22.046654 containerd[1598]: 2025-11-01 00:23:21.945 [INFO][4122] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.81dc6fb4b0604407b5656e6b1abda5cd3587d4e165e0a86fdfa56876fedd1d20 Nov 1 00:23:22.046654 containerd[1598]: 2025-11-01 00:23:21.974 [INFO][4122] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.81dc6fb4b0604407b5656e6b1abda5cd3587d4e165e0a86fdfa56876fedd1d20" host="localhost" Nov 1 00:23:22.046654 containerd[1598]: 2025-11-01 00:23:21.986 [INFO][4122] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.81dc6fb4b0604407b5656e6b1abda5cd3587d4e165e0a86fdfa56876fedd1d20" host="localhost" Nov 1 00:23:22.046654 containerd[1598]: 2025-11-01 00:23:21.986 [INFO][4122] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.81dc6fb4b0604407b5656e6b1abda5cd3587d4e165e0a86fdfa56876fedd1d20" host="localhost" Nov 1 00:23:22.046654 containerd[1598]: 2025-11-01 00:23:21.986 [INFO][4122] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:22.046654 containerd[1598]: 2025-11-01 00:23:21.986 [INFO][4122] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="81dc6fb4b0604407b5656e6b1abda5cd3587d4e165e0a86fdfa56876fedd1d20" HandleID="k8s-pod-network.81dc6fb4b0604407b5656e6b1abda5cd3587d4e165e0a86fdfa56876fedd1d20" Workload="localhost-k8s-csi--node--driver--bwpmn-eth0" Nov 1 00:23:22.046847 containerd[1598]: 2025-11-01 00:23:21.991 [INFO][4107] cni-plugin/k8s.go 418: Populated endpoint ContainerID="81dc6fb4b0604407b5656e6b1abda5cd3587d4e165e0a86fdfa56876fedd1d20" Namespace="calico-system" Pod="csi-node-driver-bwpmn" WorkloadEndpoint="localhost-k8s-csi--node--driver--bwpmn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--bwpmn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9cb8b2d7-16ad-4489-b82c-4e442c6904d5", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-bwpmn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali155678b7105", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:22.046915 containerd[1598]: 2025-11-01 00:23:21.991 [INFO][4107] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="81dc6fb4b0604407b5656e6b1abda5cd3587d4e165e0a86fdfa56876fedd1d20" Namespace="calico-system" Pod="csi-node-driver-bwpmn" WorkloadEndpoint="localhost-k8s-csi--node--driver--bwpmn-eth0" Nov 1 00:23:22.046915 containerd[1598]: 2025-11-01 00:23:21.991 [INFO][4107] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali155678b7105 ContainerID="81dc6fb4b0604407b5656e6b1abda5cd3587d4e165e0a86fdfa56876fedd1d20" Namespace="calico-system" Pod="csi-node-driver-bwpmn" WorkloadEndpoint="localhost-k8s-csi--node--driver--bwpmn-eth0" Nov 1 00:23:22.046915 containerd[1598]: 2025-11-01 00:23:22.001 [INFO][4107] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="81dc6fb4b0604407b5656e6b1abda5cd3587d4e165e0a86fdfa56876fedd1d20" Namespace="calico-system" Pod="csi-node-driver-bwpmn" WorkloadEndpoint="localhost-k8s-csi--node--driver--bwpmn-eth0" Nov 1 00:23:22.047030 containerd[1598]: 2025-11-01 00:23:22.001 [INFO][4107] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="81dc6fb4b0604407b5656e6b1abda5cd3587d4e165e0a86fdfa56876fedd1d20" 
Namespace="calico-system" Pod="csi-node-driver-bwpmn" WorkloadEndpoint="localhost-k8s-csi--node--driver--bwpmn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--bwpmn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9cb8b2d7-16ad-4489-b82c-4e442c6904d5", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"81dc6fb4b0604407b5656e6b1abda5cd3587d4e165e0a86fdfa56876fedd1d20", Pod:"csi-node-driver-bwpmn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali155678b7105", MAC:"96:5c:67:12:e3:bc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:22.047100 containerd[1598]: 2025-11-01 00:23:22.041 [INFO][4107] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="81dc6fb4b0604407b5656e6b1abda5cd3587d4e165e0a86fdfa56876fedd1d20" Namespace="calico-system" Pod="csi-node-driver-bwpmn" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--bwpmn-eth0" Nov 1 00:23:22.086309 containerd[1598]: time="2025-11-01T00:23:22.086220720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7db858884d-rlxtg,Uid:f1b660e9-a196-4e94-8db8-ec0d5d3642c8,Namespace:calico-system,Attempt:0,}" Nov 1 00:23:22.086585 kubelet[2776]: E1101 00:23:22.086195 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:22.088081 kubelet[2776]: E1101 00:23:22.087885 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:22.088881 containerd[1598]: time="2025-11-01T00:23:22.088644918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kqpwg,Uid:bdbd4b4d-c838-47d2-b2da-3a95b5735d83,Namespace:kube-system,Attempt:0,}" Nov 1 00:23:22.092836 systemd[1]: Removed slice kubepods-besteffort-podea34f150_dd20_4f23_a1df_b723d0fd4094.slice - libcontainer container kubepods-besteffort-podea34f150_dd20_4f23_a1df_b723d0fd4094.slice. 
Nov 1 00:23:22.141547 kubelet[2776]: I1101 00:23:22.141384 2776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-6pxnw" podStartSLOduration=2.509182987 podStartE2EDuration="25.141319841s" podCreationTimestamp="2025-11-01 00:22:57 +0000 UTC" firstStartedPulling="2025-11-01 00:22:57.640842298 +0000 UTC m=+21.124999046" lastFinishedPulling="2025-11-01 00:23:20.272979152 +0000 UTC m=+43.757135900" observedRunningTime="2025-11-01 00:23:22.140370469 +0000 UTC m=+45.624527247" watchObservedRunningTime="2025-11-01 00:23:22.141319841 +0000 UTC m=+45.625476589" Nov 1 00:23:22.258805 containerd[1598]: time="2025-11-01T00:23:22.258513597Z" level=info msg="connecting to shim 81dc6fb4b0604407b5656e6b1abda5cd3587d4e165e0a86fdfa56876fedd1d20" address="unix:///run/containerd/s/a4222eee4e679c8055dea576860e3eb141b1d4793a909dea74ca7400e68d222e" namespace=k8s.io protocol=ttrpc version=3 Nov 1 00:23:22.355294 systemd[1]: Started cri-containerd-81dc6fb4b0604407b5656e6b1abda5cd3587d4e165e0a86fdfa56876fedd1d20.scope - libcontainer container 81dc6fb4b0604407b5656e6b1abda5cd3587d4e165e0a86fdfa56876fedd1d20. Nov 1 00:23:22.386041 systemd[1]: Created slice kubepods-besteffort-podbcc9bae5_bbac_4d40_8ba9_b09cdd29d916.slice - libcontainer container kubepods-besteffort-podbcc9bae5_bbac_4d40_8ba9_b09cdd29d916.slice. 
Nov 1 00:23:22.435820 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:23:22.441234 kubelet[2776]: I1101 00:23:22.440911 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lkqz\" (UniqueName: \"kubernetes.io/projected/bcc9bae5-bbac-4d40-8ba9-b09cdd29d916-kube-api-access-2lkqz\") pod \"whisker-6c77b7cd5b-rlkpf\" (UID: \"bcc9bae5-bbac-4d40-8ba9-b09cdd29d916\") " pod="calico-system/whisker-6c77b7cd5b-rlkpf" Nov 1 00:23:22.441636 kubelet[2776]: I1101 00:23:22.441526 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bcc9bae5-bbac-4d40-8ba9-b09cdd29d916-whisker-ca-bundle\") pod \"whisker-6c77b7cd5b-rlkpf\" (UID: \"bcc9bae5-bbac-4d40-8ba9-b09cdd29d916\") " pod="calico-system/whisker-6c77b7cd5b-rlkpf" Nov 1 00:23:22.441636 kubelet[2776]: I1101 00:23:22.441583 2776 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bcc9bae5-bbac-4d40-8ba9-b09cdd29d916-whisker-backend-key-pair\") pod \"whisker-6c77b7cd5b-rlkpf\" (UID: \"bcc9bae5-bbac-4d40-8ba9-b09cdd29d916\") " pod="calico-system/whisker-6c77b7cd5b-rlkpf" Nov 1 00:23:22.467209 containerd[1598]: time="2025-11-01T00:23:22.467118083Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a1f9f9904bfb5fa7441f7374ddf19cdd14a9f395c7178d20e6d4dcf6740d858\" id:\"3ace54564718bac395e7f5d901007246947aa80c5dd7610b93faa0ebe7a445e0\" pid:4212 exit_status:1 exited_at:{seconds:1761956602 nanos:434471039}" Nov 1 00:23:22.539915 containerd[1598]: time="2025-11-01T00:23:22.539141458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bwpmn,Uid:9cb8b2d7-16ad-4489-b82c-4e442c6904d5,Namespace:calico-system,Attempt:0,} returns sandbox id 
\"81dc6fb4b0604407b5656e6b1abda5cd3587d4e165e0a86fdfa56876fedd1d20\"" Nov 1 00:23:22.547976 containerd[1598]: time="2025-11-01T00:23:22.547645734Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:23:22.584457 systemd-networkd[1518]: cali948b244c5b5: Link UP Nov 1 00:23:22.587710 systemd-networkd[1518]: cali948b244c5b5: Gained carrier Nov 1 00:23:22.622180 containerd[1598]: 2025-11-01 00:23:22.170 [INFO][4164] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:23:22.622180 containerd[1598]: 2025-11-01 00:23:22.206 [INFO][4164] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--kqpwg-eth0 coredns-668d6bf9bc- kube-system bdbd4b4d-c838-47d2-b2da-3a95b5735d83 989 0 2025-11-01 00:22:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-kqpwg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali948b244c5b5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="db8f0414a383f10abc6eef103293d11be75ec2ad6a84761e18f319caf44851e0" Namespace="kube-system" Pod="coredns-668d6bf9bc-kqpwg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--kqpwg-" Nov 1 00:23:22.622180 containerd[1598]: 2025-11-01 00:23:22.206 [INFO][4164] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="db8f0414a383f10abc6eef103293d11be75ec2ad6a84761e18f319caf44851e0" Namespace="kube-system" Pod="coredns-668d6bf9bc-kqpwg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--kqpwg-eth0" Nov 1 00:23:22.622180 containerd[1598]: 2025-11-01 00:23:22.273 [INFO][4186] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="db8f0414a383f10abc6eef103293d11be75ec2ad6a84761e18f319caf44851e0" 
HandleID="k8s-pod-network.db8f0414a383f10abc6eef103293d11be75ec2ad6a84761e18f319caf44851e0" Workload="localhost-k8s-coredns--668d6bf9bc--kqpwg-eth0" Nov 1 00:23:22.622639 containerd[1598]: 2025-11-01 00:23:22.274 [INFO][4186] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="db8f0414a383f10abc6eef103293d11be75ec2ad6a84761e18f319caf44851e0" HandleID="k8s-pod-network.db8f0414a383f10abc6eef103293d11be75ec2ad6a84761e18f319caf44851e0" Workload="localhost-k8s-coredns--668d6bf9bc--kqpwg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00012eb00), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-kqpwg", "timestamp":"2025-11-01 00:23:22.273654415 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:23:22.622639 containerd[1598]: 2025-11-01 00:23:22.274 [INFO][4186] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:22.622639 containerd[1598]: 2025-11-01 00:23:22.274 [INFO][4186] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:23:22.622639 containerd[1598]: 2025-11-01 00:23:22.274 [INFO][4186] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:23:22.622639 containerd[1598]: 2025-11-01 00:23:22.315 [INFO][4186] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.db8f0414a383f10abc6eef103293d11be75ec2ad6a84761e18f319caf44851e0" host="localhost" Nov 1 00:23:22.622639 containerd[1598]: 2025-11-01 00:23:22.395 [INFO][4186] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:23:22.622639 containerd[1598]: 2025-11-01 00:23:22.420 [INFO][4186] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:23:22.622639 containerd[1598]: 2025-11-01 00:23:22.452 [INFO][4186] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:23:22.622639 containerd[1598]: 2025-11-01 00:23:22.458 [INFO][4186] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:23:22.622639 containerd[1598]: 2025-11-01 00:23:22.458 [INFO][4186] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.db8f0414a383f10abc6eef103293d11be75ec2ad6a84761e18f319caf44851e0" host="localhost" Nov 1 00:23:22.623102 containerd[1598]: 2025-11-01 00:23:22.469 [INFO][4186] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.db8f0414a383f10abc6eef103293d11be75ec2ad6a84761e18f319caf44851e0 Nov 1 00:23:22.623102 containerd[1598]: 2025-11-01 00:23:22.491 [INFO][4186] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.db8f0414a383f10abc6eef103293d11be75ec2ad6a84761e18f319caf44851e0" host="localhost" Nov 1 00:23:22.623102 containerd[1598]: 2025-11-01 00:23:22.549 [INFO][4186] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.db8f0414a383f10abc6eef103293d11be75ec2ad6a84761e18f319caf44851e0" host="localhost" Nov 1 00:23:22.623102 containerd[1598]: 2025-11-01 00:23:22.551 [INFO][4186] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.db8f0414a383f10abc6eef103293d11be75ec2ad6a84761e18f319caf44851e0" host="localhost" Nov 1 00:23:22.623102 containerd[1598]: 2025-11-01 00:23:22.551 [INFO][4186] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:22.623102 containerd[1598]: 2025-11-01 00:23:22.551 [INFO][4186] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="db8f0414a383f10abc6eef103293d11be75ec2ad6a84761e18f319caf44851e0" HandleID="k8s-pod-network.db8f0414a383f10abc6eef103293d11be75ec2ad6a84761e18f319caf44851e0" Workload="localhost-k8s-coredns--668d6bf9bc--kqpwg-eth0" Nov 1 00:23:22.623364 containerd[1598]: 2025-11-01 00:23:22.566 [INFO][4164] cni-plugin/k8s.go 418: Populated endpoint ContainerID="db8f0414a383f10abc6eef103293d11be75ec2ad6a84761e18f319caf44851e0" Namespace="kube-system" Pod="coredns-668d6bf9bc-kqpwg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--kqpwg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--kqpwg-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"bdbd4b4d-c838-47d2-b2da-3a95b5735d83", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-kqpwg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali948b244c5b5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:22.623499 containerd[1598]: 2025-11-01 00:23:22.567 [INFO][4164] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="db8f0414a383f10abc6eef103293d11be75ec2ad6a84761e18f319caf44851e0" Namespace="kube-system" Pod="coredns-668d6bf9bc-kqpwg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--kqpwg-eth0" Nov 1 00:23:22.623499 containerd[1598]: 2025-11-01 00:23:22.567 [INFO][4164] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali948b244c5b5 ContainerID="db8f0414a383f10abc6eef103293d11be75ec2ad6a84761e18f319caf44851e0" Namespace="kube-system" Pod="coredns-668d6bf9bc-kqpwg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--kqpwg-eth0" Nov 1 00:23:22.623499 containerd[1598]: 2025-11-01 00:23:22.588 [INFO][4164] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="db8f0414a383f10abc6eef103293d11be75ec2ad6a84761e18f319caf44851e0" Namespace="kube-system" Pod="coredns-668d6bf9bc-kqpwg" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--kqpwg-eth0" Nov 1 00:23:22.623636 containerd[1598]: 2025-11-01 00:23:22.592 [INFO][4164] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="db8f0414a383f10abc6eef103293d11be75ec2ad6a84761e18f319caf44851e0" Namespace="kube-system" Pod="coredns-668d6bf9bc-kqpwg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--kqpwg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--kqpwg-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"bdbd4b4d-c838-47d2-b2da-3a95b5735d83", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"db8f0414a383f10abc6eef103293d11be75ec2ad6a84761e18f319caf44851e0", Pod:"coredns-668d6bf9bc-kqpwg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali948b244c5b5", MAC:"ce:e1:5c:d8:0b:ed", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:22.623636 containerd[1598]: 2025-11-01 00:23:22.611 [INFO][4164] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="db8f0414a383f10abc6eef103293d11be75ec2ad6a84761e18f319caf44851e0" Namespace="kube-system" Pod="coredns-668d6bf9bc-kqpwg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--kqpwg-eth0" Nov 1 00:23:22.659450 containerd[1598]: time="2025-11-01T00:23:22.659395627Z" level=info msg="connecting to shim db8f0414a383f10abc6eef103293d11be75ec2ad6a84761e18f319caf44851e0" address="unix:///run/containerd/s/373591e89df618629f86ae1210074461768512c1772db7664fd55b52909fb9f3" namespace=k8s.io protocol=ttrpc version=3 Nov 1 00:23:22.669816 systemd-networkd[1518]: caliae2800a8186: Link UP Nov 1 00:23:22.671379 systemd-networkd[1518]: caliae2800a8186: Gained carrier Nov 1 00:23:22.695490 containerd[1598]: time="2025-11-01T00:23:22.695440948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c77b7cd5b-rlkpf,Uid:bcc9bae5-bbac-4d40-8ba9-b09cdd29d916,Namespace:calico-system,Attempt:0,}" Nov 1 00:23:22.698423 containerd[1598]: 2025-11-01 00:23:22.178 [INFO][4149] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:23:22.698423 containerd[1598]: 2025-11-01 00:23:22.218 [INFO][4149] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7db858884d--rlxtg-eth0 calico-kube-controllers-7db858884d- calico-system f1b660e9-a196-4e94-8db8-ec0d5d3642c8 986 0 2025-11-01 00:22:57 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7db858884d projectcalico.org/namespace:calico-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7db858884d-rlxtg eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliae2800a8186 [] [] }} ContainerID="7b9dfe193c221ae6b0305ade1489d4c87dec754c6c9781a37b8c3881e3d2850a" Namespace="calico-system" Pod="calico-kube-controllers-7db858884d-rlxtg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7db858884d--rlxtg-" Nov 1 00:23:22.698423 containerd[1598]: 2025-11-01 00:23:22.218 [INFO][4149] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7b9dfe193c221ae6b0305ade1489d4c87dec754c6c9781a37b8c3881e3d2850a" Namespace="calico-system" Pod="calico-kube-controllers-7db858884d-rlxtg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7db858884d--rlxtg-eth0" Nov 1 00:23:22.698423 containerd[1598]: 2025-11-01 00:23:22.378 [INFO][4236] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7b9dfe193c221ae6b0305ade1489d4c87dec754c6c9781a37b8c3881e3d2850a" HandleID="k8s-pod-network.7b9dfe193c221ae6b0305ade1489d4c87dec754c6c9781a37b8c3881e3d2850a" Workload="localhost-k8s-calico--kube--controllers--7db858884d--rlxtg-eth0" Nov 1 00:23:22.698423 containerd[1598]: 2025-11-01 00:23:22.378 [INFO][4236] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7b9dfe193c221ae6b0305ade1489d4c87dec754c6c9781a37b8c3881e3d2850a" HandleID="k8s-pod-network.7b9dfe193c221ae6b0305ade1489d4c87dec754c6c9781a37b8c3881e3d2850a" Workload="localhost-k8s-calico--kube--controllers--7db858884d--rlxtg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f8d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7db858884d-rlxtg", "timestamp":"2025-11-01 00:23:22.378017156 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:23:22.698423 containerd[1598]: 2025-11-01 00:23:22.378 [INFO][4236] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:22.698423 containerd[1598]: 2025-11-01 00:23:22.552 [INFO][4236] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:22.698423 containerd[1598]: 2025-11-01 00:23:22.552 [INFO][4236] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:23:22.698423 containerd[1598]: 2025-11-01 00:23:22.568 [INFO][4236] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7b9dfe193c221ae6b0305ade1489d4c87dec754c6c9781a37b8c3881e3d2850a" host="localhost" Nov 1 00:23:22.698423 containerd[1598]: 2025-11-01 00:23:22.585 [INFO][4236] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:23:22.698423 containerd[1598]: 2025-11-01 00:23:22.609 [INFO][4236] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:23:22.698423 containerd[1598]: 2025-11-01 00:23:22.617 [INFO][4236] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:23:22.698423 containerd[1598]: 2025-11-01 00:23:22.621 [INFO][4236] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:23:22.698423 containerd[1598]: 2025-11-01 00:23:22.621 [INFO][4236] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7b9dfe193c221ae6b0305ade1489d4c87dec754c6c9781a37b8c3881e3d2850a" host="localhost" Nov 1 00:23:22.698423 containerd[1598]: 2025-11-01 00:23:22.624 [INFO][4236] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7b9dfe193c221ae6b0305ade1489d4c87dec754c6c9781a37b8c3881e3d2850a Nov 1 00:23:22.698423 containerd[1598]: 2025-11-01 00:23:22.637 [INFO][4236] ipam/ipam.go 
1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7b9dfe193c221ae6b0305ade1489d4c87dec754c6c9781a37b8c3881e3d2850a" host="localhost" Nov 1 00:23:22.698423 containerd[1598]: 2025-11-01 00:23:22.654 [INFO][4236] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.7b9dfe193c221ae6b0305ade1489d4c87dec754c6c9781a37b8c3881e3d2850a" host="localhost" Nov 1 00:23:22.698423 containerd[1598]: 2025-11-01 00:23:22.656 [INFO][4236] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.7b9dfe193c221ae6b0305ade1489d4c87dec754c6c9781a37b8c3881e3d2850a" host="localhost" Nov 1 00:23:22.698423 containerd[1598]: 2025-11-01 00:23:22.656 [INFO][4236] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:22.698423 containerd[1598]: 2025-11-01 00:23:22.656 [INFO][4236] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="7b9dfe193c221ae6b0305ade1489d4c87dec754c6c9781a37b8c3881e3d2850a" HandleID="k8s-pod-network.7b9dfe193c221ae6b0305ade1489d4c87dec754c6c9781a37b8c3881e3d2850a" Workload="localhost-k8s-calico--kube--controllers--7db858884d--rlxtg-eth0" Nov 1 00:23:22.699447 containerd[1598]: 2025-11-01 00:23:22.662 [INFO][4149] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7b9dfe193c221ae6b0305ade1489d4c87dec754c6c9781a37b8c3881e3d2850a" Namespace="calico-system" Pod="calico-kube-controllers-7db858884d-rlxtg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7db858884d--rlxtg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7db858884d--rlxtg-eth0", GenerateName:"calico-kube-controllers-7db858884d-", Namespace:"calico-system", SelfLink:"", UID:"f1b660e9-a196-4e94-8db8-ec0d5d3642c8", ResourceVersion:"986", Generation:0, 
CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7db858884d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7db858884d-rlxtg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliae2800a8186", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:22.699447 containerd[1598]: 2025-11-01 00:23:22.663 [INFO][4149] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="7b9dfe193c221ae6b0305ade1489d4c87dec754c6c9781a37b8c3881e3d2850a" Namespace="calico-system" Pod="calico-kube-controllers-7db858884d-rlxtg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7db858884d--rlxtg-eth0" Nov 1 00:23:22.699447 containerd[1598]: 2025-11-01 00:23:22.663 [INFO][4149] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliae2800a8186 ContainerID="7b9dfe193c221ae6b0305ade1489d4c87dec754c6c9781a37b8c3881e3d2850a" Namespace="calico-system" Pod="calico-kube-controllers-7db858884d-rlxtg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7db858884d--rlxtg-eth0" Nov 1 00:23:22.699447 containerd[1598]: 2025-11-01 00:23:22.672 [INFO][4149] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7b9dfe193c221ae6b0305ade1489d4c87dec754c6c9781a37b8c3881e3d2850a" Namespace="calico-system" Pod="calico-kube-controllers-7db858884d-rlxtg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7db858884d--rlxtg-eth0" Nov 1 00:23:22.699447 containerd[1598]: 2025-11-01 00:23:22.672 [INFO][4149] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7b9dfe193c221ae6b0305ade1489d4c87dec754c6c9781a37b8c3881e3d2850a" Namespace="calico-system" Pod="calico-kube-controllers-7db858884d-rlxtg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7db858884d--rlxtg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7db858884d--rlxtg-eth0", GenerateName:"calico-kube-controllers-7db858884d-", Namespace:"calico-system", SelfLink:"", UID:"f1b660e9-a196-4e94-8db8-ec0d5d3642c8", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7db858884d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7b9dfe193c221ae6b0305ade1489d4c87dec754c6c9781a37b8c3881e3d2850a", Pod:"calico-kube-controllers-7db858884d-rlxtg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliae2800a8186", MAC:"f6:62:ae:eb:af:25", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:22.699447 containerd[1598]: 2025-11-01 00:23:22.691 [INFO][4149] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7b9dfe193c221ae6b0305ade1489d4c87dec754c6c9781a37b8c3881e3d2850a" Namespace="calico-system" Pod="calico-kube-controllers-7db858884d-rlxtg" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7db858884d--rlxtg-eth0" Nov 1 00:23:22.702668 systemd[1]: Started cri-containerd-db8f0414a383f10abc6eef103293d11be75ec2ad6a84761e18f319caf44851e0.scope - libcontainer container db8f0414a383f10abc6eef103293d11be75ec2ad6a84761e18f319caf44851e0. Nov 1 00:23:22.726039 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:23:22.741018 containerd[1598]: time="2025-11-01T00:23:22.740793090Z" level=info msg="connecting to shim 7b9dfe193c221ae6b0305ade1489d4c87dec754c6c9781a37b8c3881e3d2850a" address="unix:///run/containerd/s/a1773c416c64c0f611a8982713f9fe15541e0da3df5b52aed2c45fe912ea4fff" namespace=k8s.io protocol=ttrpc version=3 Nov 1 00:23:22.753435 kubelet[2776]: E1101 00:23:22.752441 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:22.754480 containerd[1598]: time="2025-11-01T00:23:22.754443561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65dff998bf-bf7v4,Uid:0bca4d2d-6dfb-4f38-ab3b-dd64e533f1bf,Namespace:calico-apiserver,Attempt:0,}" Nov 1 00:23:22.754754 containerd[1598]: time="2025-11-01T00:23:22.754593102Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-666569f655-9jkmd,Uid:80ec35e2-7ac0-4d9e-82fe-2398651b9031,Namespace:calico-system,Attempt:0,}" Nov 1 00:23:22.761469 containerd[1598]: time="2025-11-01T00:23:22.754643176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s78vh,Uid:34075b49-4ccc-4510-a747-480fc74d94d8,Namespace:kube-system,Attempt:0,}" Nov 1 00:23:22.765398 kubelet[2776]: I1101 00:23:22.765357 2776 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea34f150-dd20-4f23-a1df-b723d0fd4094" path="/var/lib/kubelet/pods/ea34f150-dd20-4f23-a1df-b723d0fd4094/volumes" Nov 1 00:23:22.793252 systemd[1]: Started cri-containerd-7b9dfe193c221ae6b0305ade1489d4c87dec754c6c9781a37b8c3881e3d2850a.scope - libcontainer container 7b9dfe193c221ae6b0305ade1489d4c87dec754c6c9781a37b8c3881e3d2850a. Nov 1 00:23:22.812195 containerd[1598]: time="2025-11-01T00:23:22.812132461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-kqpwg,Uid:bdbd4b4d-c838-47d2-b2da-3a95b5735d83,Namespace:kube-system,Attempt:0,} returns sandbox id \"db8f0414a383f10abc6eef103293d11be75ec2ad6a84761e18f319caf44851e0\"" Nov 1 00:23:22.816612 kubelet[2776]: E1101 00:23:22.816567 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:22.819792 containerd[1598]: time="2025-11-01T00:23:22.819753059Z" level=info msg="CreateContainer within sandbox \"db8f0414a383f10abc6eef103293d11be75ec2ad6a84761e18f319caf44851e0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:23:22.850404 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:23:22.866877 containerd[1598]: time="2025-11-01T00:23:22.866397036Z" level=info msg="Container 91c8e2ad989ba826b6074c703660a98edc92ce23906cbedb6a0f6fd4999ceaba: CDI devices from CRI Config.CDIDevices: 
[]" Nov 1 00:23:22.879234 containerd[1598]: time="2025-11-01T00:23:22.879159932Z" level=info msg="CreateContainer within sandbox \"db8f0414a383f10abc6eef103293d11be75ec2ad6a84761e18f319caf44851e0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"91c8e2ad989ba826b6074c703660a98edc92ce23906cbedb6a0f6fd4999ceaba\"" Nov 1 00:23:22.880536 containerd[1598]: time="2025-11-01T00:23:22.880473917Z" level=info msg="StartContainer for \"91c8e2ad989ba826b6074c703660a98edc92ce23906cbedb6a0f6fd4999ceaba\"" Nov 1 00:23:22.883283 containerd[1598]: time="2025-11-01T00:23:22.883203870Z" level=info msg="connecting to shim 91c8e2ad989ba826b6074c703660a98edc92ce23906cbedb6a0f6fd4999ceaba" address="unix:///run/containerd/s/373591e89df618629f86ae1210074461768512c1772db7664fd55b52909fb9f3" protocol=ttrpc version=3 Nov 1 00:23:22.888326 containerd[1598]: time="2025-11-01T00:23:22.888249516Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 00:23:22.893527 containerd[1598]: time="2025-11-01T00:23:22.893211206Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 00:23:22.906580 containerd[1598]: time="2025-11-01T00:23:22.906502704Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:23:22.907627 kubelet[2776]: E1101 00:23:22.907505 2776 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:23:22.907772 kubelet[2776]: E1101 00:23:22.907675 2776 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:23:22.921366 kubelet[2776]: E1101 00:23:22.919819 2776 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-28xl7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeD
efault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bwpmn_calico-system(9cb8b2d7-16ad-4489-b82c-4e442c6904d5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:22.931008 containerd[1598]: time="2025-11-01T00:23:22.930956426Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:23:22.964262 containerd[1598]: time="2025-11-01T00:23:22.963351277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7db858884d-rlxtg,Uid:f1b660e9-a196-4e94-8db8-ec0d5d3642c8,Namespace:calico-system,Attempt:0,} returns sandbox id \"7b9dfe193c221ae6b0305ade1489d4c87dec754c6c9781a37b8c3881e3d2850a\"" Nov 1 00:23:22.969222 systemd[1]: Started cri-containerd-91c8e2ad989ba826b6074c703660a98edc92ce23906cbedb6a0f6fd4999ceaba.scope - libcontainer container 91c8e2ad989ba826b6074c703660a98edc92ce23906cbedb6a0f6fd4999ceaba. 
Nov 1 00:23:23.035101 systemd-networkd[1518]: cali8c61fc2d85d: Link UP Nov 1 00:23:23.035482 systemd-networkd[1518]: cali8c61fc2d85d: Gained carrier Nov 1 00:23:23.120148 containerd[1598]: 2025-11-01 00:23:22.734 [INFO][4319] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:23:23.120148 containerd[1598]: 2025-11-01 00:23:22.752 [INFO][4319] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--6c77b7cd5b--rlkpf-eth0 whisker-6c77b7cd5b- calico-system bcc9bae5-bbac-4d40-8ba9-b09cdd29d916 1027 0 2025-11-01 00:23:22 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6c77b7cd5b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6c77b7cd5b-rlkpf eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali8c61fc2d85d [] [] }} ContainerID="21509f85d6b89a9bf15b751ec9b9197e08d14a21a589c652ecf4c85c87078d65" Namespace="calico-system" Pod="whisker-6c77b7cd5b-rlkpf" WorkloadEndpoint="localhost-k8s-whisker--6c77b7cd5b--rlkpf-" Nov 1 00:23:23.120148 containerd[1598]: 2025-11-01 00:23:22.753 [INFO][4319] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="21509f85d6b89a9bf15b751ec9b9197e08d14a21a589c652ecf4c85c87078d65" Namespace="calico-system" Pod="whisker-6c77b7cd5b-rlkpf" WorkloadEndpoint="localhost-k8s-whisker--6c77b7cd5b--rlkpf-eth0" Nov 1 00:23:23.120148 containerd[1598]: 2025-11-01 00:23:22.854 [INFO][4371] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="21509f85d6b89a9bf15b751ec9b9197e08d14a21a589c652ecf4c85c87078d65" HandleID="k8s-pod-network.21509f85d6b89a9bf15b751ec9b9197e08d14a21a589c652ecf4c85c87078d65" Workload="localhost-k8s-whisker--6c77b7cd5b--rlkpf-eth0" Nov 1 00:23:23.120148 containerd[1598]: 2025-11-01 00:23:22.855 [INFO][4371] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="21509f85d6b89a9bf15b751ec9b9197e08d14a21a589c652ecf4c85c87078d65" HandleID="k8s-pod-network.21509f85d6b89a9bf15b751ec9b9197e08d14a21a589c652ecf4c85c87078d65" Workload="localhost-k8s-whisker--6c77b7cd5b--rlkpf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f980), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6c77b7cd5b-rlkpf", "timestamp":"2025-11-01 00:23:22.854717932 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:23:23.120148 containerd[1598]: 2025-11-01 00:23:22.855 [INFO][4371] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:23.120148 containerd[1598]: 2025-11-01 00:23:22.857 [INFO][4371] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:23.120148 containerd[1598]: 2025-11-01 00:23:22.857 [INFO][4371] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:23:23.120148 containerd[1598]: 2025-11-01 00:23:22.869 [INFO][4371] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.21509f85d6b89a9bf15b751ec9b9197e08d14a21a589c652ecf4c85c87078d65" host="localhost" Nov 1 00:23:23.120148 containerd[1598]: 2025-11-01 00:23:22.878 [INFO][4371] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:23:23.120148 containerd[1598]: 2025-11-01 00:23:22.893 [INFO][4371] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:23:23.120148 containerd[1598]: 2025-11-01 00:23:22.901 [INFO][4371] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:23:23.120148 containerd[1598]: 2025-11-01 00:23:22.912 [INFO][4371] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 
host="localhost" Nov 1 00:23:23.120148 containerd[1598]: 2025-11-01 00:23:22.912 [INFO][4371] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.21509f85d6b89a9bf15b751ec9b9197e08d14a21a589c652ecf4c85c87078d65" host="localhost" Nov 1 00:23:23.120148 containerd[1598]: 2025-11-01 00:23:22.919 [INFO][4371] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.21509f85d6b89a9bf15b751ec9b9197e08d14a21a589c652ecf4c85c87078d65 Nov 1 00:23:23.120148 containerd[1598]: 2025-11-01 00:23:22.947 [INFO][4371] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.21509f85d6b89a9bf15b751ec9b9197e08d14a21a589c652ecf4c85c87078d65" host="localhost" Nov 1 00:23:23.120148 containerd[1598]: 2025-11-01 00:23:22.964 [INFO][4371] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.21509f85d6b89a9bf15b751ec9b9197e08d14a21a589c652ecf4c85c87078d65" host="localhost" Nov 1 00:23:23.120148 containerd[1598]: 2025-11-01 00:23:22.964 [INFO][4371] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.21509f85d6b89a9bf15b751ec9b9197e08d14a21a589c652ecf4c85c87078d65" host="localhost" Nov 1 00:23:23.120148 containerd[1598]: 2025-11-01 00:23:22.964 [INFO][4371] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:23:23.120148 containerd[1598]: 2025-11-01 00:23:22.964 [INFO][4371] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="21509f85d6b89a9bf15b751ec9b9197e08d14a21a589c652ecf4c85c87078d65" HandleID="k8s-pod-network.21509f85d6b89a9bf15b751ec9b9197e08d14a21a589c652ecf4c85c87078d65" Workload="localhost-k8s-whisker--6c77b7cd5b--rlkpf-eth0" Nov 1 00:23:23.121201 containerd[1598]: 2025-11-01 00:23:22.993 [INFO][4319] cni-plugin/k8s.go 418: Populated endpoint ContainerID="21509f85d6b89a9bf15b751ec9b9197e08d14a21a589c652ecf4c85c87078d65" Namespace="calico-system" Pod="whisker-6c77b7cd5b-rlkpf" WorkloadEndpoint="localhost-k8s-whisker--6c77b7cd5b--rlkpf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6c77b7cd5b--rlkpf-eth0", GenerateName:"whisker-6c77b7cd5b-", Namespace:"calico-system", SelfLink:"", UID:"bcc9bae5-bbac-4d40-8ba9-b09cdd29d916", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6c77b7cd5b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6c77b7cd5b-rlkpf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8c61fc2d85d", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:23.121201 containerd[1598]: 2025-11-01 00:23:22.996 [INFO][4319] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="21509f85d6b89a9bf15b751ec9b9197e08d14a21a589c652ecf4c85c87078d65" Namespace="calico-system" Pod="whisker-6c77b7cd5b-rlkpf" WorkloadEndpoint="localhost-k8s-whisker--6c77b7cd5b--rlkpf-eth0" Nov 1 00:23:23.121201 containerd[1598]: 2025-11-01 00:23:22.996 [INFO][4319] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8c61fc2d85d ContainerID="21509f85d6b89a9bf15b751ec9b9197e08d14a21a589c652ecf4c85c87078d65" Namespace="calico-system" Pod="whisker-6c77b7cd5b-rlkpf" WorkloadEndpoint="localhost-k8s-whisker--6c77b7cd5b--rlkpf-eth0" Nov 1 00:23:23.121201 containerd[1598]: 2025-11-01 00:23:23.048 [INFO][4319] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="21509f85d6b89a9bf15b751ec9b9197e08d14a21a589c652ecf4c85c87078d65" Namespace="calico-system" Pod="whisker-6c77b7cd5b-rlkpf" WorkloadEndpoint="localhost-k8s-whisker--6c77b7cd5b--rlkpf-eth0" Nov 1 00:23:23.121201 containerd[1598]: 2025-11-01 00:23:23.055 [INFO][4319] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="21509f85d6b89a9bf15b751ec9b9197e08d14a21a589c652ecf4c85c87078d65" Namespace="calico-system" Pod="whisker-6c77b7cd5b-rlkpf" WorkloadEndpoint="localhost-k8s-whisker--6c77b7cd5b--rlkpf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6c77b7cd5b--rlkpf-eth0", GenerateName:"whisker-6c77b7cd5b-", Namespace:"calico-system", SelfLink:"", UID:"bcc9bae5-bbac-4d40-8ba9-b09cdd29d916", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 23, 22, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6c77b7cd5b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"21509f85d6b89a9bf15b751ec9b9197e08d14a21a589c652ecf4c85c87078d65", Pod:"whisker-6c77b7cd5b-rlkpf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali8c61fc2d85d", MAC:"ca:c1:5d:56:c2:88", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:23.121201 containerd[1598]: 2025-11-01 00:23:23.087 [INFO][4319] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="21509f85d6b89a9bf15b751ec9b9197e08d14a21a589c652ecf4c85c87078d65" Namespace="calico-system" Pod="whisker-6c77b7cd5b-rlkpf" WorkloadEndpoint="localhost-k8s-whisker--6c77b7cd5b--rlkpf-eth0" Nov 1 00:23:23.144897 kubelet[2776]: E1101 00:23:23.143860 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:23.185789 containerd[1598]: time="2025-11-01T00:23:23.185495387Z" level=info msg="StartContainer for \"91c8e2ad989ba826b6074c703660a98edc92ce23906cbedb6a0f6fd4999ceaba\" returns successfully" Nov 1 00:23:23.225090 systemd-networkd[1518]: cali0c252772789: Link UP Nov 1 00:23:23.228886 systemd-networkd[1518]: cali0c252772789: Gained carrier Nov 1 00:23:23.300225 containerd[1598]: 2025-11-01 
00:23:22.861 [INFO][4394] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:23:23.300225 containerd[1598]: 2025-11-01 00:23:22.883 [INFO][4394] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--s78vh-eth0 coredns-668d6bf9bc- kube-system 34075b49-4ccc-4510-a747-480fc74d94d8 884 0 2025-11-01 00:22:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-s78vh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0c252772789 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d01240579399a546d6688dff83c20da101f251fe6adcf66efc9db7e84e312d8d" Namespace="kube-system" Pod="coredns-668d6bf9bc-s78vh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s78vh-" Nov 1 00:23:23.300225 containerd[1598]: 2025-11-01 00:23:22.883 [INFO][4394] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d01240579399a546d6688dff83c20da101f251fe6adcf66efc9db7e84e312d8d" Namespace="kube-system" Pod="coredns-668d6bf9bc-s78vh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s78vh-eth0" Nov 1 00:23:23.300225 containerd[1598]: 2025-11-01 00:23:22.981 [INFO][4455] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d01240579399a546d6688dff83c20da101f251fe6adcf66efc9db7e84e312d8d" HandleID="k8s-pod-network.d01240579399a546d6688dff83c20da101f251fe6adcf66efc9db7e84e312d8d" Workload="localhost-k8s-coredns--668d6bf9bc--s78vh-eth0" Nov 1 00:23:23.300225 containerd[1598]: 2025-11-01 00:23:22.981 [INFO][4455] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d01240579399a546d6688dff83c20da101f251fe6adcf66efc9db7e84e312d8d" HandleID="k8s-pod-network.d01240579399a546d6688dff83c20da101f251fe6adcf66efc9db7e84e312d8d" 
Workload="localhost-k8s-coredns--668d6bf9bc--s78vh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ed50), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-s78vh", "timestamp":"2025-11-01 00:23:22.981396705 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:23:23.300225 containerd[1598]: 2025-11-01 00:23:22.981 [INFO][4455] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:23.300225 containerd[1598]: 2025-11-01 00:23:22.982 [INFO][4455] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 1 00:23:23.300225 containerd[1598]: 2025-11-01 00:23:22.982 [INFO][4455] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:23:23.300225 containerd[1598]: 2025-11-01 00:23:23.009 [INFO][4455] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d01240579399a546d6688dff83c20da101f251fe6adcf66efc9db7e84e312d8d" host="localhost" Nov 1 00:23:23.300225 containerd[1598]: 2025-11-01 00:23:23.078 [INFO][4455] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:23:23.300225 containerd[1598]: 2025-11-01 00:23:23.122 [INFO][4455] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:23:23.300225 containerd[1598]: 2025-11-01 00:23:23.148 [INFO][4455] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:23:23.300225 containerd[1598]: 2025-11-01 00:23:23.164 [INFO][4455] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:23:23.300225 containerd[1598]: 2025-11-01 00:23:23.164 [INFO][4455] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.d01240579399a546d6688dff83c20da101f251fe6adcf66efc9db7e84e312d8d" host="localhost" Nov 1 00:23:23.300225 containerd[1598]: 2025-11-01 00:23:23.169 [INFO][4455] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d01240579399a546d6688dff83c20da101f251fe6adcf66efc9db7e84e312d8d Nov 1 00:23:23.300225 containerd[1598]: 2025-11-01 00:23:23.181 [INFO][4455] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d01240579399a546d6688dff83c20da101f251fe6adcf66efc9db7e84e312d8d" host="localhost" Nov 1 00:23:23.300225 containerd[1598]: 2025-11-01 00:23:23.205 [INFO][4455] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.d01240579399a546d6688dff83c20da101f251fe6adcf66efc9db7e84e312d8d" host="localhost" Nov 1 00:23:23.300225 containerd[1598]: 2025-11-01 00:23:23.205 [INFO][4455] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.d01240579399a546d6688dff83c20da101f251fe6adcf66efc9db7e84e312d8d" host="localhost" Nov 1 00:23:23.300225 containerd[1598]: 2025-11-01 00:23:23.207 [INFO][4455] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 1 00:23:23.300225 containerd[1598]: 2025-11-01 00:23:23.208 [INFO][4455] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="d01240579399a546d6688dff83c20da101f251fe6adcf66efc9db7e84e312d8d" HandleID="k8s-pod-network.d01240579399a546d6688dff83c20da101f251fe6adcf66efc9db7e84e312d8d" Workload="localhost-k8s-coredns--668d6bf9bc--s78vh-eth0" Nov 1 00:23:23.304608 containerd[1598]: 2025-11-01 00:23:23.215 [INFO][4394] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d01240579399a546d6688dff83c20da101f251fe6adcf66efc9db7e84e312d8d" Namespace="kube-system" Pod="coredns-668d6bf9bc-s78vh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s78vh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--s78vh-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"34075b49-4ccc-4510-a747-480fc74d94d8", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-s78vh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0c252772789", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:23.304608 containerd[1598]: 2025-11-01 00:23:23.215 [INFO][4394] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="d01240579399a546d6688dff83c20da101f251fe6adcf66efc9db7e84e312d8d" Namespace="kube-system" Pod="coredns-668d6bf9bc-s78vh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s78vh-eth0" Nov 1 00:23:23.304608 containerd[1598]: 2025-11-01 00:23:23.215 [INFO][4394] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0c252772789 ContainerID="d01240579399a546d6688dff83c20da101f251fe6adcf66efc9db7e84e312d8d" Namespace="kube-system" Pod="coredns-668d6bf9bc-s78vh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s78vh-eth0" Nov 1 00:23:23.304608 containerd[1598]: 2025-11-01 00:23:23.239 [INFO][4394] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d01240579399a546d6688dff83c20da101f251fe6adcf66efc9db7e84e312d8d" Namespace="kube-system" Pod="coredns-668d6bf9bc-s78vh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s78vh-eth0" Nov 1 00:23:23.304608 containerd[1598]: 2025-11-01 00:23:23.240 [INFO][4394] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d01240579399a546d6688dff83c20da101f251fe6adcf66efc9db7e84e312d8d" Namespace="kube-system" Pod="coredns-668d6bf9bc-s78vh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s78vh-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--s78vh-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"34075b49-4ccc-4510-a747-480fc74d94d8", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d01240579399a546d6688dff83c20da101f251fe6adcf66efc9db7e84e312d8d", Pod:"coredns-668d6bf9bc-s78vh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0c252772789", MAC:"2a:61:cd:37:24:c7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:23.304608 containerd[1598]: 2025-11-01 00:23:23.286 [INFO][4394] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="d01240579399a546d6688dff83c20da101f251fe6adcf66efc9db7e84e312d8d" Namespace="kube-system" Pod="coredns-668d6bf9bc-s78vh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--s78vh-eth0" Nov 1 00:23:23.327564 systemd-networkd[1518]: cali155678b7105: Gained IPv6LL Nov 1 00:23:23.345520 containerd[1598]: time="2025-11-01T00:23:23.344178850Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 00:23:23.358841 containerd[1598]: time="2025-11-01T00:23:23.358745881Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 00:23:23.359454 containerd[1598]: time="2025-11-01T00:23:23.359384809Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:23:23.359808 kubelet[2776]: E1101 00:23:23.359747 2776 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:23:23.360405 kubelet[2776]: E1101 00:23:23.360362 2776 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 
00:23:23.361372 containerd[1598]: time="2025-11-01T00:23:23.361291547Z" level=info msg="connecting to shim 21509f85d6b89a9bf15b751ec9b9197e08d14a21a589c652ecf4c85c87078d65" address="unix:///run/containerd/s/49ee79f7b847a41493e2a02f563c3f62982202925b349184cb05b1f1fdf131b9" namespace=k8s.io protocol=ttrpc version=3 Nov 1 00:23:23.363892 kubelet[2776]: E1101 00:23:23.362571 2776 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-28xl7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,Secco
mpProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bwpmn_calico-system(9cb8b2d7-16ad-4489-b82c-4e442c6904d5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:23.364176 containerd[1598]: time="2025-11-01T00:23:23.364131455Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:23:23.364496 kubelet[2776]: E1101 00:23:23.364402 2776 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bwpmn" podUID="9cb8b2d7-16ad-4489-b82c-4e442c6904d5" Nov 1 00:23:23.427008 containerd[1598]: time="2025-11-01T00:23:23.425408243Z" level=info msg="connecting to shim d01240579399a546d6688dff83c20da101f251fe6adcf66efc9db7e84e312d8d" 
address="unix:///run/containerd/s/c8ae663d208c464e8173fc022abf7512c89a362ed37a170720a8f62bb51d01d5" namespace=k8s.io protocol=ttrpc version=3 Nov 1 00:23:23.433205 systemd[1]: Started cri-containerd-21509f85d6b89a9bf15b751ec9b9197e08d14a21a589c652ecf4c85c87078d65.scope - libcontainer container 21509f85d6b89a9bf15b751ec9b9197e08d14a21a589c652ecf4c85c87078d65. Nov 1 00:23:23.467676 systemd-networkd[1518]: calidaf1a5df1ed: Link UP Nov 1 00:23:23.472544 systemd-networkd[1518]: calidaf1a5df1ed: Gained carrier Nov 1 00:23:23.495659 systemd[1]: Started cri-containerd-d01240579399a546d6688dff83c20da101f251fe6adcf66efc9db7e84e312d8d.scope - libcontainer container d01240579399a546d6688dff83c20da101f251fe6adcf66efc9db7e84e312d8d. Nov 1 00:23:23.507613 containerd[1598]: 2025-11-01 00:23:22.870 [INFO][4374] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:23:23.507613 containerd[1598]: 2025-11-01 00:23:22.912 [INFO][4374] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--65dff998bf--bf7v4-eth0 calico-apiserver-65dff998bf- calico-apiserver 0bca4d2d-6dfb-4f38-ab3b-dd64e533f1bf 887 0 2025-11-01 00:22:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:65dff998bf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-65dff998bf-bf7v4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidaf1a5df1ed [] [] }} ContainerID="ad9068ebc4887cda8416d7129564b1d3d4834351d943506b7715d16eb0fce60f" Namespace="calico-apiserver" Pod="calico-apiserver-65dff998bf-bf7v4" WorkloadEndpoint="localhost-k8s-calico--apiserver--65dff998bf--bf7v4-" Nov 1 00:23:23.507613 containerd[1598]: 2025-11-01 00:23:22.913 [INFO][4374] cni-plugin/k8s.go 74: Extracted identifiers for 
CmdAddK8s ContainerID="ad9068ebc4887cda8416d7129564b1d3d4834351d943506b7715d16eb0fce60f" Namespace="calico-apiserver" Pod="calico-apiserver-65dff998bf-bf7v4" WorkloadEndpoint="localhost-k8s-calico--apiserver--65dff998bf--bf7v4-eth0" Nov 1 00:23:23.507613 containerd[1598]: 2025-11-01 00:23:23.054 [INFO][4472] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ad9068ebc4887cda8416d7129564b1d3d4834351d943506b7715d16eb0fce60f" HandleID="k8s-pod-network.ad9068ebc4887cda8416d7129564b1d3d4834351d943506b7715d16eb0fce60f" Workload="localhost-k8s-calico--apiserver--65dff998bf--bf7v4-eth0" Nov 1 00:23:23.507613 containerd[1598]: 2025-11-01 00:23:23.055 [INFO][4472] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ad9068ebc4887cda8416d7129564b1d3d4834351d943506b7715d16eb0fce60f" HandleID="k8s-pod-network.ad9068ebc4887cda8416d7129564b1d3d4834351d943506b7715d16eb0fce60f" Workload="localhost-k8s-calico--apiserver--65dff998bf--bf7v4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000135830), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-65dff998bf-bf7v4", "timestamp":"2025-11-01 00:23:23.054575955 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:23:23.507613 containerd[1598]: 2025-11-01 00:23:23.058 [INFO][4472] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:23.507613 containerd[1598]: 2025-11-01 00:23:23.207 [INFO][4472] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:23:23.507613 containerd[1598]: 2025-11-01 00:23:23.208 [INFO][4472] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:23:23.507613 containerd[1598]: 2025-11-01 00:23:23.250 [INFO][4472] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ad9068ebc4887cda8416d7129564b1d3d4834351d943506b7715d16eb0fce60f" host="localhost" Nov 1 00:23:23.507613 containerd[1598]: 2025-11-01 00:23:23.283 [INFO][4472] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:23:23.507613 containerd[1598]: 2025-11-01 00:23:23.344 [INFO][4472] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:23:23.507613 containerd[1598]: 2025-11-01 00:23:23.359 [INFO][4472] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:23:23.507613 containerd[1598]: 2025-11-01 00:23:23.381 [INFO][4472] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:23:23.507613 containerd[1598]: 2025-11-01 00:23:23.381 [INFO][4472] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ad9068ebc4887cda8416d7129564b1d3d4834351d943506b7715d16eb0fce60f" host="localhost" Nov 1 00:23:23.507613 containerd[1598]: 2025-11-01 00:23:23.390 [INFO][4472] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ad9068ebc4887cda8416d7129564b1d3d4834351d943506b7715d16eb0fce60f Nov 1 00:23:23.507613 containerd[1598]: 2025-11-01 00:23:23.401 [INFO][4472] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ad9068ebc4887cda8416d7129564b1d3d4834351d943506b7715d16eb0fce60f" host="localhost" Nov 1 00:23:23.507613 containerd[1598]: 2025-11-01 00:23:23.423 [INFO][4472] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.ad9068ebc4887cda8416d7129564b1d3d4834351d943506b7715d16eb0fce60f" host="localhost" Nov 1 00:23:23.507613 containerd[1598]: 2025-11-01 00:23:23.425 [INFO][4472] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.ad9068ebc4887cda8416d7129564b1d3d4834351d943506b7715d16eb0fce60f" host="localhost" Nov 1 00:23:23.507613 containerd[1598]: 2025-11-01 00:23:23.428 [INFO][4472] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:23.507613 containerd[1598]: 2025-11-01 00:23:23.428 [INFO][4472] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="ad9068ebc4887cda8416d7129564b1d3d4834351d943506b7715d16eb0fce60f" HandleID="k8s-pod-network.ad9068ebc4887cda8416d7129564b1d3d4834351d943506b7715d16eb0fce60f" Workload="localhost-k8s-calico--apiserver--65dff998bf--bf7v4-eth0" Nov 1 00:23:23.508835 containerd[1598]: 2025-11-01 00:23:23.450 [INFO][4374] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ad9068ebc4887cda8416d7129564b1d3d4834351d943506b7715d16eb0fce60f" Namespace="calico-apiserver" Pod="calico-apiserver-65dff998bf-bf7v4" WorkloadEndpoint="localhost-k8s-calico--apiserver--65dff998bf--bf7v4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--65dff998bf--bf7v4-eth0", GenerateName:"calico-apiserver-65dff998bf-", Namespace:"calico-apiserver", SelfLink:"", UID:"0bca4d2d-6dfb-4f38-ab3b-dd64e533f1bf", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65dff998bf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-65dff998bf-bf7v4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidaf1a5df1ed", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:23.508835 containerd[1598]: 2025-11-01 00:23:23.450 [INFO][4374] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="ad9068ebc4887cda8416d7129564b1d3d4834351d943506b7715d16eb0fce60f" Namespace="calico-apiserver" Pod="calico-apiserver-65dff998bf-bf7v4" WorkloadEndpoint="localhost-k8s-calico--apiserver--65dff998bf--bf7v4-eth0" Nov 1 00:23:23.508835 containerd[1598]: 2025-11-01 00:23:23.450 [INFO][4374] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidaf1a5df1ed ContainerID="ad9068ebc4887cda8416d7129564b1d3d4834351d943506b7715d16eb0fce60f" Namespace="calico-apiserver" Pod="calico-apiserver-65dff998bf-bf7v4" WorkloadEndpoint="localhost-k8s-calico--apiserver--65dff998bf--bf7v4-eth0" Nov 1 00:23:23.508835 containerd[1598]: 2025-11-01 00:23:23.477 [INFO][4374] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ad9068ebc4887cda8416d7129564b1d3d4834351d943506b7715d16eb0fce60f" Namespace="calico-apiserver" Pod="calico-apiserver-65dff998bf-bf7v4" WorkloadEndpoint="localhost-k8s-calico--apiserver--65dff998bf--bf7v4-eth0" Nov 1 00:23:23.508835 containerd[1598]: 2025-11-01 00:23:23.479 [INFO][4374] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="ad9068ebc4887cda8416d7129564b1d3d4834351d943506b7715d16eb0fce60f" Namespace="calico-apiserver" Pod="calico-apiserver-65dff998bf-bf7v4" WorkloadEndpoint="localhost-k8s-calico--apiserver--65dff998bf--bf7v4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--65dff998bf--bf7v4-eth0", GenerateName:"calico-apiserver-65dff998bf-", Namespace:"calico-apiserver", SelfLink:"", UID:"0bca4d2d-6dfb-4f38-ab3b-dd64e533f1bf", ResourceVersion:"887", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65dff998bf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ad9068ebc4887cda8416d7129564b1d3d4834351d943506b7715d16eb0fce60f", Pod:"calico-apiserver-65dff998bf-bf7v4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidaf1a5df1ed", MAC:"be:31:e4:21:ee:d3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:23.508835 containerd[1598]: 2025-11-01 00:23:23.503 [INFO][4374] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="ad9068ebc4887cda8416d7129564b1d3d4834351d943506b7715d16eb0fce60f" Namespace="calico-apiserver" Pod="calico-apiserver-65dff998bf-bf7v4" WorkloadEndpoint="localhost-k8s-calico--apiserver--65dff998bf--bf7v4-eth0" Nov 1 00:23:23.514748 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:23:23.534593 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:23:23.642424 containerd[1598]: time="2025-11-01T00:23:23.641347039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-s78vh,Uid:34075b49-4ccc-4510-a747-480fc74d94d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"d01240579399a546d6688dff83c20da101f251fe6adcf66efc9db7e84e312d8d\"" Nov 1 00:23:23.642830 kubelet[2776]: E1101 00:23:23.642793 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:23.645922 containerd[1598]: time="2025-11-01T00:23:23.645847152Z" level=info msg="CreateContainer within sandbox \"d01240579399a546d6688dff83c20da101f251fe6adcf66efc9db7e84e312d8d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 1 00:23:23.663010 containerd[1598]: time="2025-11-01T00:23:23.662902110Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a1f9f9904bfb5fa7441f7374ddf19cdd14a9f395c7178d20e6d4dcf6740d858\" id:\"ed87dcc5580390cbc2d41091d48714fd827a14bf8afbd327793a9950d1620356\" pid:4617 exit_status:1 exited_at:{seconds:1761956603 nanos:662422691}" Nov 1 00:23:23.674247 systemd-networkd[1518]: calid0c4348ca83: Link UP Nov 1 00:23:23.677430 systemd-networkd[1518]: calid0c4348ca83: Gained carrier Nov 1 00:23:23.703112 containerd[1598]: time="2025-11-01T00:23:23.702977880Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-6c77b7cd5b-rlkpf,Uid:bcc9bae5-bbac-4d40-8ba9-b09cdd29d916,Namespace:calico-system,Attempt:0,} returns sandbox id \"21509f85d6b89a9bf15b751ec9b9197e08d14a21a589c652ecf4c85c87078d65\"" Nov 1 00:23:23.704709 containerd[1598]: time="2025-11-01T00:23:23.704665447Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 00:23:23.715338 containerd[1598]: 2025-11-01 00:23:22.847 [INFO][4414] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:23:23.715338 containerd[1598]: 2025-11-01 00:23:22.869 [INFO][4414] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--666569f655--9jkmd-eth0 goldmane-666569f655- calico-system 80ec35e2-7ac0-4d9e-82fe-2398651b9031 888 0 2025-11-01 00:22:55 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-666569f655-9jkmd eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calid0c4348ca83 [] [] }} ContainerID="3c090145f01bf03f9214c48118e729dd442ffb63607d3b5affb32850c282db45" Namespace="calico-system" Pod="goldmane-666569f655-9jkmd" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--9jkmd-" Nov 1 00:23:23.715338 containerd[1598]: 2025-11-01 00:23:22.869 [INFO][4414] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3c090145f01bf03f9214c48118e729dd442ffb63607d3b5affb32850c282db45" Namespace="calico-system" Pod="goldmane-666569f655-9jkmd" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--9jkmd-eth0" Nov 1 00:23:23.715338 containerd[1598]: 2025-11-01 00:23:23.157 [INFO][4438] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3c090145f01bf03f9214c48118e729dd442ffb63607d3b5affb32850c282db45" 
HandleID="k8s-pod-network.3c090145f01bf03f9214c48118e729dd442ffb63607d3b5affb32850c282db45" Workload="localhost-k8s-goldmane--666569f655--9jkmd-eth0" Nov 1 00:23:23.715338 containerd[1598]: 2025-11-01 00:23:23.159 [INFO][4438] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3c090145f01bf03f9214c48118e729dd442ffb63607d3b5affb32850c282db45" HandleID="k8s-pod-network.3c090145f01bf03f9214c48118e729dd442ffb63607d3b5affb32850c282db45" Workload="localhost-k8s-goldmane--666569f655--9jkmd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f8b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-666569f655-9jkmd", "timestamp":"2025-11-01 00:23:23.157322438 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:23:23.715338 containerd[1598]: 2025-11-01 00:23:23.160 [INFO][4438] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:23.715338 containerd[1598]: 2025-11-01 00:23:23.428 [INFO][4438] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:23:23.715338 containerd[1598]: 2025-11-01 00:23:23.432 [INFO][4438] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:23:23.715338 containerd[1598]: 2025-11-01 00:23:23.469 [INFO][4438] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3c090145f01bf03f9214c48118e729dd442ffb63607d3b5affb32850c282db45" host="localhost" Nov 1 00:23:23.715338 containerd[1598]: 2025-11-01 00:23:23.498 [INFO][4438] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:23:23.715338 containerd[1598]: 2025-11-01 00:23:23.524 [INFO][4438] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:23:23.715338 containerd[1598]: 2025-11-01 00:23:23.531 [INFO][4438] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:23:23.715338 containerd[1598]: 2025-11-01 00:23:23.545 [INFO][4438] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:23:23.715338 containerd[1598]: 2025-11-01 00:23:23.547 [INFO][4438] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3c090145f01bf03f9214c48118e729dd442ffb63607d3b5affb32850c282db45" host="localhost" Nov 1 00:23:23.715338 containerd[1598]: 2025-11-01 00:23:23.555 [INFO][4438] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3c090145f01bf03f9214c48118e729dd442ffb63607d3b5affb32850c282db45 Nov 1 00:23:23.715338 containerd[1598]: 2025-11-01 00:23:23.576 [INFO][4438] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3c090145f01bf03f9214c48118e729dd442ffb63607d3b5affb32850c282db45" host="localhost" Nov 1 00:23:23.715338 containerd[1598]: 2025-11-01 00:23:23.658 [INFO][4438] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.3c090145f01bf03f9214c48118e729dd442ffb63607d3b5affb32850c282db45" host="localhost" Nov 1 00:23:23.715338 containerd[1598]: 2025-11-01 00:23:23.658 [INFO][4438] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.3c090145f01bf03f9214c48118e729dd442ffb63607d3b5affb32850c282db45" host="localhost" Nov 1 00:23:23.715338 containerd[1598]: 2025-11-01 00:23:23.658 [INFO][4438] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:23.715338 containerd[1598]: 2025-11-01 00:23:23.658 [INFO][4438] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="3c090145f01bf03f9214c48118e729dd442ffb63607d3b5affb32850c282db45" HandleID="k8s-pod-network.3c090145f01bf03f9214c48118e729dd442ffb63607d3b5affb32850c282db45" Workload="localhost-k8s-goldmane--666569f655--9jkmd-eth0" Nov 1 00:23:23.716281 containerd[1598]: 2025-11-01 00:23:23.664 [INFO][4414] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3c090145f01bf03f9214c48118e729dd442ffb63607d3b5affb32850c282db45" Namespace="calico-system" Pod="goldmane-666569f655-9jkmd" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--9jkmd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--9jkmd-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"80ec35e2-7ac0-4d9e-82fe-2398651b9031", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-666569f655-9jkmd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid0c4348ca83", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:23.716281 containerd[1598]: 2025-11-01 00:23:23.664 [INFO][4414] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="3c090145f01bf03f9214c48118e729dd442ffb63607d3b5affb32850c282db45" Namespace="calico-system" Pod="goldmane-666569f655-9jkmd" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--9jkmd-eth0" Nov 1 00:23:23.716281 containerd[1598]: 2025-11-01 00:23:23.667 [INFO][4414] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid0c4348ca83 ContainerID="3c090145f01bf03f9214c48118e729dd442ffb63607d3b5affb32850c282db45" Namespace="calico-system" Pod="goldmane-666569f655-9jkmd" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--9jkmd-eth0" Nov 1 00:23:23.716281 containerd[1598]: 2025-11-01 00:23:23.676 [INFO][4414] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3c090145f01bf03f9214c48118e729dd442ffb63607d3b5affb32850c282db45" Namespace="calico-system" Pod="goldmane-666569f655-9jkmd" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--9jkmd-eth0" Nov 1 00:23:23.716281 containerd[1598]: 2025-11-01 00:23:23.682 [INFO][4414] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3c090145f01bf03f9214c48118e729dd442ffb63607d3b5affb32850c282db45" Namespace="calico-system" Pod="goldmane-666569f655-9jkmd" 
WorkloadEndpoint="localhost-k8s-goldmane--666569f655--9jkmd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--666569f655--9jkmd-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"80ec35e2-7ac0-4d9e-82fe-2398651b9031", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3c090145f01bf03f9214c48118e729dd442ffb63607d3b5affb32850c282db45", Pod:"goldmane-666569f655-9jkmd", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid0c4348ca83", MAC:"ba:d0:8c:36:20:12", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:23.716281 containerd[1598]: 2025-11-01 00:23:23.710 [INFO][4414] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3c090145f01bf03f9214c48118e729dd442ffb63607d3b5affb32850c282db45" Namespace="calico-system" Pod="goldmane-666569f655-9jkmd" WorkloadEndpoint="localhost-k8s-goldmane--666569f655--9jkmd-eth0" Nov 1 00:23:23.724731 containerd[1598]: time="2025-11-01T00:23:23.724647928Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:23:23.724948 containerd[1598]: time="2025-11-01T00:23:23.724767082Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 00:23:23.725183 kubelet[2776]: E1101 00:23:23.725087 2776 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:23:23.725284 kubelet[2776]: E1101 00:23:23.725228 2776 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:23:23.726128 containerd[1598]: time="2025-11-01T00:23:23.726055409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:23:23.728467 kubelet[2776]: E1101 00:23:23.728383 2776 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2gq2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7db858884d-rlxtg_calico-system(f1b660e9-a196-4e94-8db8-ec0d5d3642c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:23.729688 kubelet[2776]: E1101 00:23:23.729628 2776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7db858884d-rlxtg" podUID="f1b660e9-a196-4e94-8db8-ec0d5d3642c8" Nov 1 00:23:23.736858 containerd[1598]: time="2025-11-01T00:23:23.736789157Z" level=info msg="connecting to shim ad9068ebc4887cda8416d7129564b1d3d4834351d943506b7715d16eb0fce60f" 
address="unix:///run/containerd/s/aa9cfd42c3573cb90420013aa0a4d05186031ff00308b86f8a696e751831115f" namespace=k8s.io protocol=ttrpc version=3 Nov 1 00:23:23.741126 containerd[1598]: time="2025-11-01T00:23:23.741075028Z" level=info msg="Container ac8e8c07a88f4e7b9ebcfdc79be134aca8880402f69aab944218058bd401da04: CDI devices from CRI Config.CDIDevices: []" Nov 1 00:23:23.760342 containerd[1598]: time="2025-11-01T00:23:23.760234726Z" level=info msg="CreateContainer within sandbox \"d01240579399a546d6688dff83c20da101f251fe6adcf66efc9db7e84e312d8d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ac8e8c07a88f4e7b9ebcfdc79be134aca8880402f69aab944218058bd401da04\"" Nov 1 00:23:23.762240 containerd[1598]: time="2025-11-01T00:23:23.762186377Z" level=info msg="StartContainer for \"ac8e8c07a88f4e7b9ebcfdc79be134aca8880402f69aab944218058bd401da04\"" Nov 1 00:23:23.765125 containerd[1598]: time="2025-11-01T00:23:23.765053086Z" level=info msg="connecting to shim ac8e8c07a88f4e7b9ebcfdc79be134aca8880402f69aab944218058bd401da04" address="unix:///run/containerd/s/c8ae663d208c464e8173fc022abf7512c89a362ed37a170720a8f62bb51d01d5" protocol=ttrpc version=3 Nov 1 00:23:23.790386 containerd[1598]: time="2025-11-01T00:23:23.790291578Z" level=info msg="connecting to shim 3c090145f01bf03f9214c48118e729dd442ffb63607d3b5affb32850c282db45" address="unix:///run/containerd/s/87c5850b0981399c5015f57d7e3c0abf330c019143310a630a7a6489c61e3c70" namespace=k8s.io protocol=ttrpc version=3 Nov 1 00:23:23.794503 systemd[1]: Started cri-containerd-ad9068ebc4887cda8416d7129564b1d3d4834351d943506b7715d16eb0fce60f.scope - libcontainer container ad9068ebc4887cda8416d7129564b1d3d4834351d943506b7715d16eb0fce60f. Nov 1 00:23:23.815330 systemd[1]: Started cri-containerd-ac8e8c07a88f4e7b9ebcfdc79be134aca8880402f69aab944218058bd401da04.scope - libcontainer container ac8e8c07a88f4e7b9ebcfdc79be134aca8880402f69aab944218058bd401da04. 
Nov 1 00:23:23.823354 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:23:23.850792 systemd[1]: Started cri-containerd-3c090145f01bf03f9214c48118e729dd442ffb63607d3b5affb32850c282db45.scope - libcontainer container 3c090145f01bf03f9214c48118e729dd442ffb63607d3b5affb32850c282db45. Nov 1 00:23:23.884680 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:23:23.890135 containerd[1598]: time="2025-11-01T00:23:23.890083789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65dff998bf-bf7v4,Uid:0bca4d2d-6dfb-4f38-ab3b-dd64e533f1bf,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ad9068ebc4887cda8416d7129564b1d3d4834351d943506b7715d16eb0fce60f\"" Nov 1 00:23:23.904494 containerd[1598]: time="2025-11-01T00:23:23.904404398Z" level=info msg="StartContainer for \"ac8e8c07a88f4e7b9ebcfdc79be134aca8880402f69aab944218058bd401da04\" returns successfully" Nov 1 00:23:23.955505 containerd[1598]: time="2025-11-01T00:23:23.953527743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-9jkmd,Uid:80ec35e2-7ac0-4d9e-82fe-2398651b9031,Namespace:calico-system,Attempt:0,} returns sandbox id \"3c090145f01bf03f9214c48118e729dd442ffb63607d3b5affb32850c282db45\"" Nov 1 00:23:23.966158 systemd-networkd[1518]: cali948b244c5b5: Gained IPv6LL Nov 1 00:23:24.048875 containerd[1598]: time="2025-11-01T00:23:24.048798187Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 00:23:24.052662 containerd[1598]: time="2025-11-01T00:23:24.052571076Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
Nov 1 00:23:24.052860 containerd[1598]: time="2025-11-01T00:23:24.052662338Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:23:24.052912 kubelet[2776]: E1101 00:23:24.052875 2776 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:23:24.053020 kubelet[2776]: E1101 00:23:24.052978 2776 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:23:24.053423 kubelet[2776]: E1101 00:23:24.053364 2776 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:34c059f2a6b44c5584ae5b7b85878e40,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2lkqz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6c77b7cd5b-rlkpf_calico-system(bcc9bae5-bbac-4d40-8ba9-b09cdd29d916): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:24.054053 containerd[1598]: time="2025-11-01T00:23:24.054003324Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:23:24.154826 
kubelet[2776]: E1101 00:23:24.154767 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:24.158635 kubelet[2776]: E1101 00:23:24.158562 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:24.167426 kubelet[2776]: E1101 00:23:24.167343 2776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7db858884d-rlxtg" podUID="f1b660e9-a196-4e94-8db8-ec0d5d3642c8" Nov 1 00:23:24.168429 kubelet[2776]: E1101 00:23:24.168344 2776 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bwpmn" podUID="9cb8b2d7-16ad-4489-b82c-4e442c6904d5" Nov 1 00:23:24.196807 kubelet[2776]: I1101 00:23:24.196406 2776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-kqpwg" podStartSLOduration=43.196298811 podStartE2EDuration="43.196298811s" podCreationTimestamp="2025-11-01 00:22:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:23:24.177672275 +0000 UTC m=+47.661829053" watchObservedRunningTime="2025-11-01 00:23:24.196298811 +0000 UTC m=+47.680455559" Nov 1 00:23:24.243245 kubelet[2776]: I1101 00:23:24.242851 2776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-s78vh" podStartSLOduration=43.242822266 podStartE2EDuration="43.242822266s" podCreationTimestamp="2025-11-01 00:22:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 00:23:24.214965794 +0000 UTC m=+47.699122572" watchObservedRunningTime="2025-11-01 00:23:24.242822266 +0000 UTC m=+47.726979014" Nov 1 00:23:24.394716 containerd[1598]: time="2025-11-01T00:23:24.394655679Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 00:23:24.397396 containerd[1598]: time="2025-11-01T00:23:24.397266808Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:23:24.397396 containerd[1598]: time="2025-11-01T00:23:24.397338122Z" level=info msg="stop 
pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:23:24.397651 kubelet[2776]: E1101 00:23:24.397602 2776 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:24.397736 kubelet[2776]: E1101 00:23:24.397670 2776 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:24.397981 kubelet[2776]: E1101 00:23:24.397910 2776 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jqmjf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-65dff998bf-bf7v4_calico-apiserver(0bca4d2d-6dfb-4f38-ab3b-dd64e533f1bf): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:24.398459 containerd[1598]: time="2025-11-01T00:23:24.398419390Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:23:24.399610 kubelet[2776]: E1101 00:23:24.399565 2776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65dff998bf-bf7v4" podUID="0bca4d2d-6dfb-4f38-ab3b-dd64e533f1bf" Nov 1 00:23:24.542043 kubelet[2776]: I1101 00:23:24.540601 2776 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 1 00:23:24.542043 kubelet[2776]: E1101 00:23:24.541142 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:24.543126 systemd-networkd[1518]: caliae2800a8186: Gained IPv6LL Nov 1 00:23:24.731770 containerd[1598]: time="2025-11-01T00:23:24.731550865Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 00:23:24.733345 systemd-networkd[1518]: cali0c252772789: Gained IPv6LL Nov 1 00:23:24.735275 containerd[1598]: time="2025-11-01T00:23:24.735070308Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not 
found" Nov 1 00:23:24.735275 containerd[1598]: time="2025-11-01T00:23:24.735218747Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:23:24.735669 kubelet[2776]: E1101 00:23:24.735621 2776 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:23:24.735921 kubelet[2776]: E1101 00:23:24.735846 2776 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:23:24.736542 kubelet[2776]: E1101 00:23:24.736422 2776 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gpht2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-9jkmd_calico-system(80ec35e2-7ac0-4d9e-82fe-2398651b9031): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:24.738053 containerd[1598]: time="2025-11-01T00:23:24.737172192Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:23:24.738454 kubelet[2776]: E1101 00:23:24.738347 2776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9jkmd" podUID="80ec35e2-7ac0-4d9e-82fe-2398651b9031" Nov 1 00:23:24.751676 containerd[1598]: time="2025-11-01T00:23:24.751593338Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-65dff998bf-kplcg,Uid:8e740e5a-3e3f-487f-be71-f50848ddb11c,Namespace:calico-apiserver,Attempt:0,}" Nov 1 00:23:24.797287 systemd-networkd[1518]: cali8c61fc2d85d: Gained IPv6LL Nov 1 00:23:24.963488 systemd-networkd[1518]: cali1895712bd41: Link UP Nov 1 00:23:24.967615 systemd-networkd[1518]: cali1895712bd41: Gained carrier Nov 1 00:23:25.002736 containerd[1598]: 2025-11-01 00:23:24.812 [INFO][4905] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 1 00:23:25.002736 containerd[1598]: 2025-11-01 00:23:24.831 [INFO][4905] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--65dff998bf--kplcg-eth0 calico-apiserver-65dff998bf- calico-apiserver 8e740e5a-3e3f-487f-be71-f50848ddb11c 886 0 2025-11-01 00:22:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:65dff998bf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-65dff998bf-kplcg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1895712bd41 [] [] }} ContainerID="a4c7b46c4892789a1724fef045f6417bdedfdc518e8183d40f3d7ba043be8120" Namespace="calico-apiserver" Pod="calico-apiserver-65dff998bf-kplcg" WorkloadEndpoint="localhost-k8s-calico--apiserver--65dff998bf--kplcg-" Nov 1 00:23:25.002736 containerd[1598]: 2025-11-01 00:23:24.831 [INFO][4905] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a4c7b46c4892789a1724fef045f6417bdedfdc518e8183d40f3d7ba043be8120" Namespace="calico-apiserver" Pod="calico-apiserver-65dff998bf-kplcg" WorkloadEndpoint="localhost-k8s-calico--apiserver--65dff998bf--kplcg-eth0" Nov 1 00:23:25.002736 containerd[1598]: 2025-11-01 00:23:24.883 [INFO][4921] ipam/ipam_plugin.go 227: Calico CNI IPAM request count 
IPv4=1 IPv6=0 ContainerID="a4c7b46c4892789a1724fef045f6417bdedfdc518e8183d40f3d7ba043be8120" HandleID="k8s-pod-network.a4c7b46c4892789a1724fef045f6417bdedfdc518e8183d40f3d7ba043be8120" Workload="localhost-k8s-calico--apiserver--65dff998bf--kplcg-eth0" Nov 1 00:23:25.002736 containerd[1598]: 2025-11-01 00:23:24.884 [INFO][4921] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a4c7b46c4892789a1724fef045f6417bdedfdc518e8183d40f3d7ba043be8120" HandleID="k8s-pod-network.a4c7b46c4892789a1724fef045f6417bdedfdc518e8183d40f3d7ba043be8120" Workload="localhost-k8s-calico--apiserver--65dff998bf--kplcg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000324120), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-65dff998bf-kplcg", "timestamp":"2025-11-01 00:23:24.883862204 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 1 00:23:25.002736 containerd[1598]: 2025-11-01 00:23:24.884 [INFO][4921] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 1 00:23:25.002736 containerd[1598]: 2025-11-01 00:23:24.884 [INFO][4921] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 1 00:23:25.002736 containerd[1598]: 2025-11-01 00:23:24.884 [INFO][4921] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 1 00:23:25.002736 containerd[1598]: 2025-11-01 00:23:24.893 [INFO][4921] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a4c7b46c4892789a1724fef045f6417bdedfdc518e8183d40f3d7ba043be8120" host="localhost" Nov 1 00:23:25.002736 containerd[1598]: 2025-11-01 00:23:24.906 [INFO][4921] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 1 00:23:25.002736 containerd[1598]: 2025-11-01 00:23:24.917 [INFO][4921] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 1 00:23:25.002736 containerd[1598]: 2025-11-01 00:23:24.920 [INFO][4921] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 1 00:23:25.002736 containerd[1598]: 2025-11-01 00:23:24.928 [INFO][4921] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 1 00:23:25.002736 containerd[1598]: 2025-11-01 00:23:24.928 [INFO][4921] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a4c7b46c4892789a1724fef045f6417bdedfdc518e8183d40f3d7ba043be8120" host="localhost" Nov 1 00:23:25.002736 containerd[1598]: 2025-11-01 00:23:24.931 [INFO][4921] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a4c7b46c4892789a1724fef045f6417bdedfdc518e8183d40f3d7ba043be8120 Nov 1 00:23:25.002736 containerd[1598]: 2025-11-01 00:23:24.939 [INFO][4921] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a4c7b46c4892789a1724fef045f6417bdedfdc518e8183d40f3d7ba043be8120" host="localhost" Nov 1 00:23:25.002736 containerd[1598]: 2025-11-01 00:23:24.950 [INFO][4921] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.a4c7b46c4892789a1724fef045f6417bdedfdc518e8183d40f3d7ba043be8120" host="localhost" Nov 1 00:23:25.002736 containerd[1598]: 2025-11-01 00:23:24.950 [INFO][4921] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.a4c7b46c4892789a1724fef045f6417bdedfdc518e8183d40f3d7ba043be8120" host="localhost" Nov 1 00:23:25.002736 containerd[1598]: 2025-11-01 00:23:24.951 [INFO][4921] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 1 00:23:25.002736 containerd[1598]: 2025-11-01 00:23:24.951 [INFO][4921] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="a4c7b46c4892789a1724fef045f6417bdedfdc518e8183d40f3d7ba043be8120" HandleID="k8s-pod-network.a4c7b46c4892789a1724fef045f6417bdedfdc518e8183d40f3d7ba043be8120" Workload="localhost-k8s-calico--apiserver--65dff998bf--kplcg-eth0" Nov 1 00:23:25.003576 containerd[1598]: 2025-11-01 00:23:24.955 [INFO][4905] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a4c7b46c4892789a1724fef045f6417bdedfdc518e8183d40f3d7ba043be8120" Namespace="calico-apiserver" Pod="calico-apiserver-65dff998bf-kplcg" WorkloadEndpoint="localhost-k8s-calico--apiserver--65dff998bf--kplcg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--65dff998bf--kplcg-eth0", GenerateName:"calico-apiserver-65dff998bf-", Namespace:"calico-apiserver", SelfLink:"", UID:"8e740e5a-3e3f-487f-be71-f50848ddb11c", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65dff998bf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-65dff998bf-kplcg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1895712bd41", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:25.003576 containerd[1598]: 2025-11-01 00:23:24.955 [INFO][4905] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="a4c7b46c4892789a1724fef045f6417bdedfdc518e8183d40f3d7ba043be8120" Namespace="calico-apiserver" Pod="calico-apiserver-65dff998bf-kplcg" WorkloadEndpoint="localhost-k8s-calico--apiserver--65dff998bf--kplcg-eth0" Nov 1 00:23:25.003576 containerd[1598]: 2025-11-01 00:23:24.955 [INFO][4905] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1895712bd41 ContainerID="a4c7b46c4892789a1724fef045f6417bdedfdc518e8183d40f3d7ba043be8120" Namespace="calico-apiserver" Pod="calico-apiserver-65dff998bf-kplcg" WorkloadEndpoint="localhost-k8s-calico--apiserver--65dff998bf--kplcg-eth0" Nov 1 00:23:25.003576 containerd[1598]: 2025-11-01 00:23:24.968 [INFO][4905] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a4c7b46c4892789a1724fef045f6417bdedfdc518e8183d40f3d7ba043be8120" Namespace="calico-apiserver" Pod="calico-apiserver-65dff998bf-kplcg" WorkloadEndpoint="localhost-k8s-calico--apiserver--65dff998bf--kplcg-eth0" Nov 1 00:23:25.003576 containerd[1598]: 2025-11-01 00:23:24.975 [INFO][4905] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID 
to endpoint ContainerID="a4c7b46c4892789a1724fef045f6417bdedfdc518e8183d40f3d7ba043be8120" Namespace="calico-apiserver" Pod="calico-apiserver-65dff998bf-kplcg" WorkloadEndpoint="localhost-k8s-calico--apiserver--65dff998bf--kplcg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--65dff998bf--kplcg-eth0", GenerateName:"calico-apiserver-65dff998bf-", Namespace:"calico-apiserver", SelfLink:"", UID:"8e740e5a-3e3f-487f-be71-f50848ddb11c", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2025, time.November, 1, 0, 22, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65dff998bf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a4c7b46c4892789a1724fef045f6417bdedfdc518e8183d40f3d7ba043be8120", Pod:"calico-apiserver-65dff998bf-kplcg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1895712bd41", MAC:"de:ff:ca:4a:e2:1a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 1 00:23:25.003576 containerd[1598]: 2025-11-01 00:23:24.996 [INFO][4905] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="a4c7b46c4892789a1724fef045f6417bdedfdc518e8183d40f3d7ba043be8120" Namespace="calico-apiserver" Pod="calico-apiserver-65dff998bf-kplcg" WorkloadEndpoint="localhost-k8s-calico--apiserver--65dff998bf--kplcg-eth0" Nov 1 00:23:25.052005 containerd[1598]: time="2025-11-01T00:23:25.050359747Z" level=info msg="connecting to shim a4c7b46c4892789a1724fef045f6417bdedfdc518e8183d40f3d7ba043be8120" address="unix:///run/containerd/s/664190a822310c3e8facc1315a0ed81794d13076ddbf880695966b5972068100" namespace=k8s.io protocol=ttrpc version=3 Nov 1 00:23:25.053685 containerd[1598]: time="2025-11-01T00:23:25.053624171Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 00:23:25.057275 containerd[1598]: time="2025-11-01T00:23:25.057144726Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:23:25.057577 containerd[1598]: time="2025-11-01T00:23:25.057195442Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:23:25.058347 kubelet[2776]: E1101 00:23:25.058260 2776 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:23:25.058429 kubelet[2776]: E1101 00:23:25.058356 2776 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:23:25.059278 kubelet[2776]: E1101 00:23:25.059214 2776 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2lkqz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Vol
umeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6c77b7cd5b-rlkpf_calico-system(bcc9bae5-bbac-4d40-8ba9-b09cdd29d916): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:25.060600 kubelet[2776]: E1101 00:23:25.060455 2776 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c77b7cd5b-rlkpf" podUID="bcc9bae5-bbac-4d40-8ba9-b09cdd29d916" Nov 1 00:23:25.093292 systemd[1]: Started cri-containerd-a4c7b46c4892789a1724fef045f6417bdedfdc518e8183d40f3d7ba043be8120.scope - libcontainer container a4c7b46c4892789a1724fef045f6417bdedfdc518e8183d40f3d7ba043be8120. 
Nov 1 00:23:25.115128 systemd-resolved[1306]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 1 00:23:25.165334 containerd[1598]: time="2025-11-01T00:23:25.165241758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65dff998bf-kplcg,Uid:8e740e5a-3e3f-487f-be71-f50848ddb11c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a4c7b46c4892789a1724fef045f6417bdedfdc518e8183d40f3d7ba043be8120\"" Nov 1 00:23:25.167325 containerd[1598]: time="2025-11-01T00:23:25.167250766Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:23:25.177477 kubelet[2776]: E1101 00:23:25.177427 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:25.178273 kubelet[2776]: E1101 00:23:25.178230 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:25.178344 kubelet[2776]: E1101 00:23:25.178295 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:25.178909 kubelet[2776]: E1101 00:23:25.178865 2776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65dff998bf-bf7v4" podUID="0bca4d2d-6dfb-4f38-ab3b-dd64e533f1bf" Nov 1 
00:23:25.179368 kubelet[2776]: E1101 00:23:25.179338 2776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9jkmd" podUID="80ec35e2-7ac0-4d9e-82fe-2398651b9031" Nov 1 00:23:25.179484 kubelet[2776]: E1101 00:23:25.179435 2776 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c77b7cd5b-rlkpf" podUID="bcc9bae5-bbac-4d40-8ba9-b09cdd29d916" Nov 1 00:23:25.375532 systemd-networkd[1518]: calidaf1a5df1ed: Gained IPv6LL Nov 1 00:23:25.502145 systemd-networkd[1518]: calid0c4348ca83: Gained IPv6LL Nov 1 00:23:25.510809 containerd[1598]: time="2025-11-01T00:23:25.510723486Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 00:23:25.545413 
containerd[1598]: time="2025-11-01T00:23:25.545307469Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:23:25.545632 containerd[1598]: time="2025-11-01T00:23:25.545446571Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:23:25.545765 kubelet[2776]: E1101 00:23:25.545710 2776 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:25.545824 kubelet[2776]: E1101 00:23:25.545780 2776 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:25.546475 kubelet[2776]: E1101 00:23:25.546379 2776 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5mhhj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-65dff998bf-kplcg_calico-apiserver(8e740e5a-3e3f-487f-be71-f50848ddb11c): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:25.548002 kubelet[2776]: E1101 00:23:25.547960 2776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65dff998bf-kplcg" podUID="8e740e5a-3e3f-487f-be71-f50848ddb11c" Nov 1 00:23:26.075170 systemd-networkd[1518]: vxlan.calico: Link UP Nov 1 00:23:26.075508 systemd-networkd[1518]: vxlan.calico: Gained carrier Nov 1 00:23:26.180861 kubelet[2776]: E1101 00:23:26.180302 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:26.183503 kubelet[2776]: E1101 00:23:26.183412 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:26.191681 kubelet[2776]: E1101 00:23:26.191582 2776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65dff998bf-kplcg" 
podUID="8e740e5a-3e3f-487f-be71-f50848ddb11c" Nov 1 00:23:26.242408 systemd[1]: Started sshd@11-10.0.0.116:22-10.0.0.1:60554.service - OpenSSH per-connection server daemon (10.0.0.1:60554). Nov 1 00:23:26.333109 sshd[5078]: Accepted publickey for core from 10.0.0.1 port 60554 ssh2: RSA SHA256:ejpXjL08eXwq5E+RKrHGlM9AwE1NxRVT+vpv8k52wss Nov 1 00:23:26.337173 sshd-session[5078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:23:26.344604 systemd-logind[1575]: New session 12 of user core. Nov 1 00:23:26.357073 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 1 00:23:26.525881 systemd-networkd[1518]: cali1895712bd41: Gained IPv6LL Nov 1 00:23:26.561128 sshd[5083]: Connection closed by 10.0.0.1 port 60554 Nov 1 00:23:26.561523 sshd-session[5078]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:26.568854 systemd[1]: sshd@11-10.0.0.116:22-10.0.0.1:60554.service: Deactivated successfully. Nov 1 00:23:26.572096 systemd[1]: session-12.scope: Deactivated successfully. Nov 1 00:23:26.574804 systemd-logind[1575]: Session 12 logged out. Waiting for processes to exit. Nov 1 00:23:26.579777 systemd-logind[1575]: Removed session 12. Nov 1 00:23:28.061309 systemd-networkd[1518]: vxlan.calico: Gained IPv6LL Nov 1 00:23:31.574365 systemd[1]: Started sshd@12-10.0.0.116:22-10.0.0.1:60558.service - OpenSSH per-connection server daemon (10.0.0.1:60558). Nov 1 00:23:31.637730 sshd[5149]: Accepted publickey for core from 10.0.0.1 port 60558 ssh2: RSA SHA256:ejpXjL08eXwq5E+RKrHGlM9AwE1NxRVT+vpv8k52wss Nov 1 00:23:31.639561 sshd-session[5149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:23:31.644594 systemd-logind[1575]: New session 13 of user core. Nov 1 00:23:31.657238 systemd[1]: Started session-13.scope - Session 13 of User core. 
Nov 1 00:23:31.794267 sshd[5154]: Connection closed by 10.0.0.1 port 60558 Nov 1 00:23:31.794726 sshd-session[5149]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:31.806436 systemd[1]: sshd@12-10.0.0.116:22-10.0.0.1:60558.service: Deactivated successfully. Nov 1 00:23:31.809157 systemd[1]: session-13.scope: Deactivated successfully. Nov 1 00:23:31.810328 systemd-logind[1575]: Session 13 logged out. Waiting for processes to exit. Nov 1 00:23:31.814751 systemd[1]: Started sshd@13-10.0.0.116:22-10.0.0.1:60562.service - OpenSSH per-connection server daemon (10.0.0.1:60562). Nov 1 00:23:31.815843 systemd-logind[1575]: Removed session 13. Nov 1 00:23:31.884580 sshd[5169]: Accepted publickey for core from 10.0.0.1 port 60562 ssh2: RSA SHA256:ejpXjL08eXwq5E+RKrHGlM9AwE1NxRVT+vpv8k52wss Nov 1 00:23:31.886711 sshd-session[5169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:23:31.891818 systemd-logind[1575]: New session 14 of user core. Nov 1 00:23:31.906285 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 1 00:23:32.070534 sshd[5172]: Connection closed by 10.0.0.1 port 60562 Nov 1 00:23:32.072159 sshd-session[5169]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:32.086801 systemd[1]: sshd@13-10.0.0.116:22-10.0.0.1:60562.service: Deactivated successfully. Nov 1 00:23:32.090397 systemd[1]: session-14.scope: Deactivated successfully. Nov 1 00:23:32.093889 systemd-logind[1575]: Session 14 logged out. Waiting for processes to exit. Nov 1 00:23:32.097867 systemd[1]: Started sshd@14-10.0.0.116:22-10.0.0.1:60568.service - OpenSSH per-connection server daemon (10.0.0.1:60568). Nov 1 00:23:32.100431 systemd-logind[1575]: Removed session 14. 
Nov 1 00:23:32.171965 sshd[5183]: Accepted publickey for core from 10.0.0.1 port 60568 ssh2: RSA SHA256:ejpXjL08eXwq5E+RKrHGlM9AwE1NxRVT+vpv8k52wss Nov 1 00:23:32.173750 sshd-session[5183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:23:32.179042 systemd-logind[1575]: New session 15 of user core. Nov 1 00:23:32.188163 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 1 00:23:32.320693 sshd[5186]: Connection closed by 10.0.0.1 port 60568 Nov 1 00:23:32.321108 sshd-session[5183]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:32.327175 systemd[1]: sshd@14-10.0.0.116:22-10.0.0.1:60568.service: Deactivated successfully. Nov 1 00:23:32.330279 systemd[1]: session-15.scope: Deactivated successfully. Nov 1 00:23:32.331784 systemd-logind[1575]: Session 15 logged out. Waiting for processes to exit. Nov 1 00:23:32.333266 systemd-logind[1575]: Removed session 15. Nov 1 00:23:35.758955 containerd[1598]: time="2025-11-01T00:23:35.758601348Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:23:36.070542 containerd[1598]: time="2025-11-01T00:23:36.070466143Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 00:23:36.071781 containerd[1598]: time="2025-11-01T00:23:36.071703674Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:23:36.071843 containerd[1598]: time="2025-11-01T00:23:36.071768138Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 1 00:23:36.072087 kubelet[2776]: E1101 00:23:36.072008 2776 log.go:32] "PullImage from image service failed" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:23:36.072087 kubelet[2776]: E1101 00:23:36.072083 2776 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:23:36.072663 kubelet[2776]: E1101 00:23:36.072449 2776 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2g
q2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7db858884d-rlxtg_calico-system(f1b660e9-a196-4e94-8db8-ec0d5d3642c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:36.072851 containerd[1598]: time="2025-11-01T00:23:36.072825082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:23:36.074214 kubelet[2776]: E1101 00:23:36.074171 2776 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7db858884d-rlxtg" podUID="f1b660e9-a196-4e94-8db8-ec0d5d3642c8" Nov 1 00:23:36.429407 containerd[1598]: time="2025-11-01T00:23:36.429246609Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 00:23:36.497144 containerd[1598]: time="2025-11-01T00:23:36.497077351Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:23:36.497327 containerd[1598]: time="2025-11-01T00:23:36.497106046Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:23:36.497457 kubelet[2776]: E1101 00:23:36.497391 2776 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:23:36.497516 kubelet[2776]: E1101 00:23:36.497461 2776 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 
00:23:36.497637 kubelet[2776]: E1101 00:23:36.497601 2776 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:34c059f2a6b44c5584ae5b7b85878e40,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2lkqz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6c77b7cd5b-rlkpf_calico-system(bcc9bae5-bbac-4d40-8ba9-b09cdd29d916): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:36.499598 containerd[1598]: 
time="2025-11-01T00:23:36.499521843Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 00:23:36.872667 containerd[1598]: time="2025-11-01T00:23:36.872593162Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 00:23:36.873955 containerd[1598]: time="2025-11-01T00:23:36.873879135Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:23:36.874056 containerd[1598]: time="2025-11-01T00:23:36.874022231Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:23:36.874205 kubelet[2776]: E1101 00:23:36.874166 2776 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:23:36.874269 kubelet[2776]: E1101 00:23:36.874217 2776 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:23:36.874542 kubelet[2776]: E1101 00:23:36.874433 2776 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2lkqz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6c77b7cd5b-rlkpf_calico-system(bcc9bae5-bbac-4d40-8ba9-b09cdd29d916): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:36.874692 containerd[1598]: time="2025-11-01T00:23:36.874562199Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:23:36.876048 kubelet[2776]: E1101 00:23:36.876008 2776 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c77b7cd5b-rlkpf" podUID="bcc9bae5-bbac-4d40-8ba9-b09cdd29d916" Nov 1 00:23:37.185971 containerd[1598]: time="2025-11-01T00:23:37.185711783Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 00:23:37.189403 containerd[1598]: time="2025-11-01T00:23:37.189266237Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:23:37.189403 containerd[1598]: time="2025-11-01T00:23:37.189325761Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:23:37.189814 
kubelet[2776]: E1101 00:23:37.189581 2776 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:37.189814 kubelet[2776]: E1101 00:23:37.189646 2776 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:37.190710 containerd[1598]: time="2025-11-01T00:23:37.190619017Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:23:37.190913 kubelet[2776]: E1101 00:23:37.190852 2776 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jqmjf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-65dff998bf-bf7v4_calico-apiserver(0bca4d2d-6dfb-4f38-ab3b-dd64e533f1bf): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:37.192325 kubelet[2776]: E1101 00:23:37.192289 2776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65dff998bf-bf7v4" podUID="0bca4d2d-6dfb-4f38-ab3b-dd64e533f1bf" Nov 1 00:23:37.345918 systemd[1]: Started sshd@15-10.0.0.116:22-10.0.0.1:44832.service - OpenSSH per-connection server daemon (10.0.0.1:44832). Nov 1 00:23:37.466756 sshd[5214]: Accepted publickey for core from 10.0.0.1 port 44832 ssh2: RSA SHA256:ejpXjL08eXwq5E+RKrHGlM9AwE1NxRVT+vpv8k52wss Nov 1 00:23:37.469345 sshd-session[5214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:23:37.476625 systemd-logind[1575]: New session 16 of user core. Nov 1 00:23:37.484300 systemd[1]: Started session-16.scope - Session 16 of User core. 
Nov 1 00:23:37.503948 containerd[1598]: time="2025-11-01T00:23:37.503116018Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 00:23:37.504903 containerd[1598]: time="2025-11-01T00:23:37.504803783Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:23:37.505135 containerd[1598]: time="2025-11-01T00:23:37.504816798Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 00:23:37.505315 kubelet[2776]: E1101 00:23:37.505253 2776 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:23:37.505418 kubelet[2776]: E1101 00:23:37.505333 2776 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:23:37.505559 kubelet[2776]: E1101 00:23:37.505506 2776 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-28xl7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bwpmn_calico-system(9cb8b2d7-16ad-4489-b82c-4e442c6904d5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:37.508131 containerd[1598]: time="2025-11-01T00:23:37.508065744Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 1 00:23:37.676219 sshd[5217]: Connection closed by 10.0.0.1 port 44832 Nov 1 00:23:37.676848 sshd-session[5214]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:37.684177 systemd[1]: sshd@15-10.0.0.116:22-10.0.0.1:44832.service: Deactivated successfully. Nov 1 00:23:37.687247 systemd[1]: session-16.scope: Deactivated successfully. Nov 1 00:23:37.690439 systemd-logind[1575]: Session 16 logged out. Waiting for processes to exit. Nov 1 00:23:37.692475 systemd-logind[1575]: Removed session 16. Nov 1 00:23:37.814693 containerd[1598]: time="2025-11-01T00:23:37.814614529Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 00:23:37.839143 containerd[1598]: time="2025-11-01T00:23:37.839060847Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 1 00:23:37.839318 containerd[1598]: time="2025-11-01T00:23:37.839079663Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 1 00:23:37.839522 kubelet[2776]: E1101 00:23:37.839448 2776 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:23:37.839637 kubelet[2776]: E1101 00:23:37.839535 2776 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 1 00:23:37.840446 kubelet[2776]: E1101 00:23:37.840385 2776 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-28xl7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{
Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bwpmn_calico-system(9cb8b2d7-16ad-4489-b82c-4e442c6904d5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:37.841612 kubelet[2776]: E1101 00:23:37.841559 2776 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bwpmn" podUID="9cb8b2d7-16ad-4489-b82c-4e442c6904d5" Nov 1 00:23:38.762944 containerd[1598]: time="2025-11-01T00:23:38.762489657Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:23:39.107246 containerd[1598]: 
time="2025-11-01T00:23:39.107166784Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 00:23:39.108976 containerd[1598]: time="2025-11-01T00:23:39.108861718Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:23:39.108976 containerd[1598]: time="2025-11-01T00:23:39.108966218Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:23:39.109287 kubelet[2776]: E1101 00:23:39.109228 2776 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:39.109853 kubelet[2776]: E1101 00:23:39.109298 2776 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:23:39.109853 kubelet[2776]: E1101 00:23:39.109651 2776 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5mhhj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-65dff998bf-kplcg_calico-apiserver(8e740e5a-3e3f-487f-be71-f50848ddb11c): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:39.110151 containerd[1598]: time="2025-11-01T00:23:39.110028208Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:23:39.111248 kubelet[2776]: E1101 00:23:39.111144 2776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65dff998bf-kplcg" podUID="8e740e5a-3e3f-487f-be71-f50848ddb11c" Nov 1 00:23:39.420062 containerd[1598]: time="2025-11-01T00:23:39.419862923Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 00:23:39.421307 containerd[1598]: time="2025-11-01T00:23:39.421261538Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:23:39.421400 containerd[1598]: time="2025-11-01T00:23:39.421351582Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:23:39.421655 kubelet[2776]: E1101 00:23:39.421592 2776 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:23:39.421732 kubelet[2776]: E1101 00:23:39.421672 2776 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:23:39.421905 kubelet[2776]: E1101 00:23:39.421860 2776 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gpht2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubP
ath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-9jkmd_calico-system(80ec35e2-7ac0-4d9e-82fe-2398651b9031): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:23:39.423163 kubelet[2776]: E1101 00:23:39.423083 2776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9jkmd" podUID="80ec35e2-7ac0-4d9e-82fe-2398651b9031" Nov 1 00:23:42.696354 systemd[1]: Started sshd@16-10.0.0.116:22-10.0.0.1:44840.service - OpenSSH per-connection server daemon (10.0.0.1:44840). Nov 1 00:23:42.751238 sshd[5234]: Accepted publickey for core from 10.0.0.1 port 44840 ssh2: RSA SHA256:ejpXjL08eXwq5E+RKrHGlM9AwE1NxRVT+vpv8k52wss Nov 1 00:23:42.753407 sshd-session[5234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:23:42.758519 systemd-logind[1575]: New session 17 of user core. Nov 1 00:23:42.770245 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 1 00:23:42.935525 sshd[5237]: Connection closed by 10.0.0.1 port 44840 Nov 1 00:23:42.935921 sshd-session[5234]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:42.941473 systemd[1]: sshd@16-10.0.0.116:22-10.0.0.1:44840.service: Deactivated successfully. Nov 1 00:23:42.944286 systemd[1]: session-17.scope: Deactivated successfully. Nov 1 00:23:42.945453 systemd-logind[1575]: Session 17 logged out. Waiting for processes to exit. Nov 1 00:23:42.947958 systemd-logind[1575]: Removed session 17. Nov 1 00:23:47.952792 systemd[1]: Started sshd@17-10.0.0.116:22-10.0.0.1:41384.service - OpenSSH per-connection server daemon (10.0.0.1:41384). Nov 1 00:23:48.020194 sshd[5259]: Accepted publickey for core from 10.0.0.1 port 41384 ssh2: RSA SHA256:ejpXjL08eXwq5E+RKrHGlM9AwE1NxRVT+vpv8k52wss Nov 1 00:23:48.022092 sshd-session[5259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:23:48.028524 systemd-logind[1575]: New session 18 of user core. Nov 1 00:23:48.035106 systemd[1]: Started session-18.scope - Session 18 of User core. 
Nov 1 00:23:48.161841 sshd[5262]: Connection closed by 10.0.0.1 port 41384 Nov 1 00:23:48.162220 sshd-session[5259]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:48.166874 systemd[1]: sshd@17-10.0.0.116:22-10.0.0.1:41384.service: Deactivated successfully. Nov 1 00:23:48.169190 systemd[1]: session-18.scope: Deactivated successfully. Nov 1 00:23:48.170136 systemd-logind[1575]: Session 18 logged out. Waiting for processes to exit. Nov 1 00:23:48.171437 systemd-logind[1575]: Removed session 18. Nov 1 00:23:48.752703 kubelet[2776]: E1101 00:23:48.752614 2776 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c77b7cd5b-rlkpf" podUID="bcc9bae5-bbac-4d40-8ba9-b09cdd29d916" Nov 1 00:23:50.752327 kubelet[2776]: E1101 00:23:50.752272 2776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to 
resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7db858884d-rlxtg" podUID="f1b660e9-a196-4e94-8db8-ec0d5d3642c8" Nov 1 00:23:51.752966 kubelet[2776]: E1101 00:23:51.752849 2776 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bwpmn" podUID="9cb8b2d7-16ad-4489-b82c-4e442c6904d5" Nov 1 00:23:52.752420 kubelet[2776]: E1101 00:23:52.751970 2776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9jkmd" podUID="80ec35e2-7ac0-4d9e-82fe-2398651b9031" Nov 1 00:23:52.759151 kubelet[2776]: E1101 00:23:52.759083 2776 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65dff998bf-bf7v4" podUID="0bca4d2d-6dfb-4f38-ab3b-dd64e533f1bf" Nov 1 00:23:53.180075 systemd[1]: Started sshd@18-10.0.0.116:22-10.0.0.1:41392.service - OpenSSH per-connection server daemon (10.0.0.1:41392). Nov 1 00:23:53.233328 sshd[5294]: Accepted publickey for core from 10.0.0.1 port 41392 ssh2: RSA SHA256:ejpXjL08eXwq5E+RKrHGlM9AwE1NxRVT+vpv8k52wss Nov 1 00:23:53.236423 sshd-session[5294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:23:53.248346 systemd-logind[1575]: New session 19 of user core. Nov 1 00:23:53.258260 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 1 00:23:53.265632 containerd[1598]: time="2025-11-01T00:23:53.265569874Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a1f9f9904bfb5fa7441f7374ddf19cdd14a9f395c7178d20e6d4dcf6740d858\" id:\"d2ad7f336dde6ab448a850ecbe1efd7d55b611137d47d201a55698d5f7a6c6aa\" pid:5288 exited_at:{seconds:1761956633 nanos:265019725}" Nov 1 00:23:53.268979 kubelet[2776]: E1101 00:23:53.268908 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:23:53.439413 sshd[5304]: Connection closed by 10.0.0.1 port 41392 Nov 1 00:23:53.439971 sshd-session[5294]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:53.452796 systemd[1]: sshd@18-10.0.0.116:22-10.0.0.1:41392.service: Deactivated successfully. 
Nov 1 00:23:53.454964 systemd[1]: session-19.scope: Deactivated successfully. Nov 1 00:23:53.456056 systemd-logind[1575]: Session 19 logged out. Waiting for processes to exit. Nov 1 00:23:53.458009 systemd-logind[1575]: Removed session 19. Nov 1 00:23:53.459829 systemd[1]: Started sshd@19-10.0.0.116:22-10.0.0.1:41406.service - OpenSSH per-connection server daemon (10.0.0.1:41406). Nov 1 00:23:53.531460 sshd[5317]: Accepted publickey for core from 10.0.0.1 port 41406 ssh2: RSA SHA256:ejpXjL08eXwq5E+RKrHGlM9AwE1NxRVT+vpv8k52wss Nov 1 00:23:53.533506 sshd-session[5317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:23:53.539500 systemd-logind[1575]: New session 20 of user core. Nov 1 00:23:53.549283 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 1 00:23:53.751542 kubelet[2776]: E1101 00:23:53.751367 2776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65dff998bf-kplcg" podUID="8e740e5a-3e3f-487f-be71-f50848ddb11c" Nov 1 00:23:53.908645 sshd[5320]: Connection closed by 10.0.0.1 port 41406 Nov 1 00:23:53.910191 sshd-session[5317]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:53.920842 systemd[1]: sshd@19-10.0.0.116:22-10.0.0.1:41406.service: Deactivated successfully. Nov 1 00:23:53.923325 systemd[1]: session-20.scope: Deactivated successfully. Nov 1 00:23:53.924408 systemd-logind[1575]: Session 20 logged out. Waiting for processes to exit. 
Nov 1 00:23:53.929570 systemd[1]: Started sshd@20-10.0.0.116:22-10.0.0.1:41412.service - OpenSSH per-connection server daemon (10.0.0.1:41412). Nov 1 00:23:53.930724 systemd-logind[1575]: Removed session 20. Nov 1 00:23:53.999195 sshd[5332]: Accepted publickey for core from 10.0.0.1 port 41412 ssh2: RSA SHA256:ejpXjL08eXwq5E+RKrHGlM9AwE1NxRVT+vpv8k52wss Nov 1 00:23:54.002052 sshd-session[5332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:23:54.009449 systemd-logind[1575]: New session 21 of user core. Nov 1 00:23:54.018192 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 1 00:23:54.725612 sshd[5335]: Connection closed by 10.0.0.1 port 41412 Nov 1 00:23:54.726764 sshd-session[5332]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:54.742584 systemd[1]: sshd@20-10.0.0.116:22-10.0.0.1:41412.service: Deactivated successfully. Nov 1 00:23:54.746318 systemd[1]: session-21.scope: Deactivated successfully. Nov 1 00:23:54.749920 systemd-logind[1575]: Session 21 logged out. Waiting for processes to exit. Nov 1 00:23:54.760278 systemd[1]: Started sshd@21-10.0.0.116:22-10.0.0.1:41416.service - OpenSSH per-connection server daemon (10.0.0.1:41416). Nov 1 00:23:54.761979 systemd-logind[1575]: Removed session 21. Nov 1 00:23:54.833165 sshd[5357]: Accepted publickey for core from 10.0.0.1 port 41416 ssh2: RSA SHA256:ejpXjL08eXwq5E+RKrHGlM9AwE1NxRVT+vpv8k52wss Nov 1 00:23:54.836508 sshd-session[5357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:23:54.846632 systemd-logind[1575]: New session 22 of user core. Nov 1 00:23:54.862408 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 1 00:23:55.234636 sshd[5360]: Connection closed by 10.0.0.1 port 41416 Nov 1 00:23:55.235159 sshd-session[5357]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:55.246197 systemd[1]: sshd@21-10.0.0.116:22-10.0.0.1:41416.service: Deactivated successfully. 
Nov 1 00:23:55.249604 systemd[1]: session-22.scope: Deactivated successfully. Nov 1 00:23:55.252209 systemd-logind[1575]: Session 22 logged out. Waiting for processes to exit. Nov 1 00:23:55.255613 systemd[1]: Started sshd@22-10.0.0.116:22-10.0.0.1:41430.service - OpenSSH per-connection server daemon (10.0.0.1:41430). Nov 1 00:23:55.256564 systemd-logind[1575]: Removed session 22. Nov 1 00:23:55.328361 sshd[5372]: Accepted publickey for core from 10.0.0.1 port 41430 ssh2: RSA SHA256:ejpXjL08eXwq5E+RKrHGlM9AwE1NxRVT+vpv8k52wss Nov 1 00:23:55.330507 sshd-session[5372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:23:55.338454 systemd-logind[1575]: New session 23 of user core. Nov 1 00:23:55.347210 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 1 00:23:55.578973 sshd[5375]: Connection closed by 10.0.0.1 port 41430 Nov 1 00:23:55.579389 sshd-session[5372]: pam_unix(sshd:session): session closed for user core Nov 1 00:23:55.584554 systemd[1]: sshd@22-10.0.0.116:22-10.0.0.1:41430.service: Deactivated successfully. Nov 1 00:23:55.587251 systemd[1]: session-23.scope: Deactivated successfully. Nov 1 00:23:55.588156 systemd-logind[1575]: Session 23 logged out. Waiting for processes to exit. Nov 1 00:23:55.589393 systemd-logind[1575]: Removed session 23. 
Nov 1 00:23:59.752859 containerd[1598]: time="2025-11-01T00:23:59.752776244Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 1 00:24:00.393686 containerd[1598]: time="2025-11-01T00:24:00.393623101Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 00:24:00.485413 containerd[1598]: time="2025-11-01T00:24:00.485302091Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 1 00:24:00.485632 containerd[1598]: time="2025-11-01T00:24:00.485302272Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 1 00:24:00.485807 kubelet[2776]: E1101 00:24:00.485719 2776 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:24:00.486418 kubelet[2776]: E1101 00:24:00.485815 2776 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 1 00:24:00.486418 kubelet[2776]: E1101 00:24:00.486097 2776 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:34c059f2a6b44c5584ae5b7b85878e40,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2lkqz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6c77b7cd5b-rlkpf_calico-system(bcc9bae5-bbac-4d40-8ba9-b09cdd29d916): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:00.488292 containerd[1598]: time="2025-11-01T00:24:00.488239888Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 1 
00:24:00.592027 systemd[1]: Started sshd@23-10.0.0.116:22-10.0.0.1:42980.service - OpenSSH per-connection server daemon (10.0.0.1:42980). Nov 1 00:24:00.680539 sshd[5388]: Accepted publickey for core from 10.0.0.1 port 42980 ssh2: RSA SHA256:ejpXjL08eXwq5E+RKrHGlM9AwE1NxRVT+vpv8k52wss Nov 1 00:24:00.682602 sshd-session[5388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:24:00.688834 systemd-logind[1575]: New session 24 of user core. Nov 1 00:24:00.707114 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 1 00:24:00.846596 sshd[5391]: Connection closed by 10.0.0.1 port 42980 Nov 1 00:24:00.848223 sshd-session[5388]: pam_unix(sshd:session): session closed for user core Nov 1 00:24:00.854557 systemd[1]: sshd@23-10.0.0.116:22-10.0.0.1:42980.service: Deactivated successfully. Nov 1 00:24:00.857507 systemd[1]: session-24.scope: Deactivated successfully. Nov 1 00:24:00.859116 systemd-logind[1575]: Session 24 logged out. Waiting for processes to exit. Nov 1 00:24:00.860782 systemd-logind[1575]: Removed session 24. 
Nov 1 00:24:00.888538 containerd[1598]: time="2025-11-01T00:24:00.888465134Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 00:24:00.889920 containerd[1598]: time="2025-11-01T00:24:00.889861079Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 1 00:24:00.890068 containerd[1598]: time="2025-11-01T00:24:00.889960197Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 1 00:24:00.890294 kubelet[2776]: E1101 00:24:00.890163 2776 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:24:00.890294 kubelet[2776]: E1101 00:24:00.890224 2776 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 1 00:24:00.890423 kubelet[2776]: E1101 00:24:00.890352 2776 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2lkqz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6c77b7cd5b-rlkpf_calico-system(bcc9bae5-bbac-4d40-8ba9-b09cdd29d916): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:00.891608 kubelet[2776]: E1101 00:24:00.891553 2776 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c77b7cd5b-rlkpf" podUID="bcc9bae5-bbac-4d40-8ba9-b09cdd29d916" Nov 1 00:24:01.752311 containerd[1598]: time="2025-11-01T00:24:01.752198331Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 1 00:24:02.072486 containerd[1598]: time="2025-11-01T00:24:02.072434251Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 00:24:02.073847 containerd[1598]: time="2025-11-01T00:24:02.073776491Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 1 00:24:02.073972 containerd[1598]: time="2025-11-01T00:24:02.073792482Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, 
bytes read=85" Nov 1 00:24:02.074170 kubelet[2776]: E1101 00:24:02.074120 2776 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:24:02.074528 kubelet[2776]: E1101 00:24:02.074185 2776 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 1 00:24:02.074528 kubelet[2776]: E1101 00:24:02.074349 2776 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2gq2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7db858884d-rlxtg_calico-system(f1b660e9-a196-4e94-8db8-ec0d5d3642c8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:02.075532 kubelet[2776]: E1101 00:24:02.075488 2776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7db858884d-rlxtg" podUID="f1b660e9-a196-4e94-8db8-ec0d5d3642c8" Nov 1 00:24:03.751129 kubelet[2776]: E1101 00:24:03.751059 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 1 00:24:04.753416 containerd[1598]: time="2025-11-01T00:24:04.753143777Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:24:05.065354 containerd[1598]: time="2025-11-01T00:24:05.065273010Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 00:24:05.066667 containerd[1598]: time="2025-11-01T00:24:05.066629666Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:24:05.066892 containerd[1598]: time="2025-11-01T00:24:05.066736600Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:24:05.066997 kubelet[2776]: E1101 00:24:05.066946 2776 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:24:05.067349 kubelet[2776]: E1101 00:24:05.067014 2776 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:24:05.067349 kubelet[2776]: E1101 00:24:05.067164 2776 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jqmjf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-65dff998bf-bf7v4_calico-apiserver(0bca4d2d-6dfb-4f38-ab3b-dd64e533f1bf): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:05.068441 kubelet[2776]: E1101 00:24:05.068403 2776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65dff998bf-bf7v4" podUID="0bca4d2d-6dfb-4f38-ab3b-dd64e533f1bf" Nov 1 00:24:05.752289 containerd[1598]: time="2025-11-01T00:24:05.752219243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 1 00:24:05.865731 systemd[1]: Started sshd@24-10.0.0.116:22-10.0.0.1:42996.service - OpenSSH per-connection server daemon (10.0.0.1:42996). Nov 1 00:24:05.926756 sshd[5406]: Accepted publickey for core from 10.0.0.1 port 42996 ssh2: RSA SHA256:ejpXjL08eXwq5E+RKrHGlM9AwE1NxRVT+vpv8k52wss Nov 1 00:24:05.928397 sshd-session[5406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 1 00:24:05.933494 systemd-logind[1575]: New session 25 of user core. Nov 1 00:24:05.947137 systemd[1]: Started session-25.scope - Session 25 of User core. 
Nov 1 00:24:06.078241 containerd[1598]: time="2025-11-01T00:24:06.078163054Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 00:24:06.106646 containerd[1598]: time="2025-11-01T00:24:06.106538889Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 1 00:24:06.106646 containerd[1598]: time="2025-11-01T00:24:06.106613040Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 1 00:24:06.106914 kubelet[2776]: E1101 00:24:06.106858 2776 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:24:06.107382 kubelet[2776]: E1101 00:24:06.107003 2776 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 1 00:24:06.107900 containerd[1598]: time="2025-11-01T00:24:06.107464586Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 1 00:24:06.108203 kubelet[2776]: E1101 00:24:06.107403 2776 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gpht2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-9jkmd_calico-system(80ec35e2-7ac0-4d9e-82fe-2398651b9031): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:06.108520 sshd[5409]: Connection closed by 10.0.0.1 port 42996 Nov 1 00:24:06.108660 sshd-session[5406]: pam_unix(sshd:session): session closed for user core Nov 1 00:24:06.109144 kubelet[2776]: E1101 00:24:06.109093 2776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9jkmd" podUID="80ec35e2-7ac0-4d9e-82fe-2398651b9031" Nov 1 00:24:06.116135 systemd[1]: sshd@24-10.0.0.116:22-10.0.0.1:42996.service: Deactivated successfully. 
Nov 1 00:24:06.119535 systemd[1]: session-25.scope: Deactivated successfully. Nov 1 00:24:06.120966 systemd-logind[1575]: Session 25 logged out. Waiting for processes to exit. Nov 1 00:24:06.124293 systemd-logind[1575]: Removed session 25. Nov 1 00:24:06.433008 containerd[1598]: time="2025-11-01T00:24:06.432728328Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 00:24:06.434344 containerd[1598]: time="2025-11-01T00:24:06.434297137Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 1 00:24:06.434445 containerd[1598]: time="2025-11-01T00:24:06.434383020Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 1 00:24:06.434647 kubelet[2776]: E1101 00:24:06.434584 2776 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:24:06.434703 kubelet[2776]: E1101 00:24:06.434656 2776 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 1 00:24:06.434912 kubelet[2776]: E1101 00:24:06.434850 2776 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5mhhj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-65dff998bf-kplcg_calico-apiserver(8e740e5a-3e3f-487f-be71-f50848ddb11c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 1 00:24:06.436168 kubelet[2776]: E1101 00:24:06.436117 2776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65dff998bf-kplcg" podUID="8e740e5a-3e3f-487f-be71-f50848ddb11c" Nov 1 00:24:06.753024 containerd[1598]: time="2025-11-01T00:24:06.752809803Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 1 00:24:07.064621 containerd[1598]: 
time="2025-11-01T00:24:07.064523041Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 1 00:24:07.066287 containerd[1598]: time="2025-11-01T00:24:07.066191558Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 1 00:24:07.066452 containerd[1598]: time="2025-11-01T00:24:07.066198501Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 1 00:24:07.066536 kubelet[2776]: E1101 00:24:07.066488 2776 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:24:07.066655 kubelet[2776]: E1101 00:24:07.066550 2776 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 1 00:24:07.066777 kubelet[2776]: E1101 00:24:07.066727 2776 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-28xl7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bwpmn_calico-system(9cb8b2d7-16ad-4489-b82c-4e442c6904d5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Nov 1 00:24:07.069639 containerd[1598]: time="2025-11-01T00:24:07.068977946Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Nov 1 00:24:07.383911 containerd[1598]: time="2025-11-01T00:24:07.383731209Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 1 00:24:07.385167 containerd[1598]: time="2025-11-01T00:24:07.385099986Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Nov 1 00:24:07.385267 containerd[1598]: time="2025-11-01T00:24:07.385156925Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Nov 1 00:24:07.385554 kubelet[2776]: E1101 00:24:07.385478 2776 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 1 00:24:07.385886 kubelet[2776]: E1101 00:24:07.385563 2776 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 1 00:24:07.385886 kubelet[2776]: E1101 00:24:07.385741 2776 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-28xl7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bwpmn_calico-system(9cb8b2d7-16ad-4489-b82c-4e442c6904d5): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Nov 1 00:24:07.387042 kubelet[2776]: E1101 00:24:07.386970 2776 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bwpmn" podUID="9cb8b2d7-16ad-4489-b82c-4e442c6904d5"
Nov 1 00:24:07.750842 kubelet[2776]: E1101 00:24:07.750680 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:24:11.126192 systemd[1]: Started sshd@25-10.0.0.116:22-10.0.0.1:43540.service - OpenSSH per-connection server daemon (10.0.0.1:43540).
Nov 1 00:24:11.187725 sshd[5430]: Accepted publickey for core from 10.0.0.1 port 43540 ssh2: RSA SHA256:ejpXjL08eXwq5E+RKrHGlM9AwE1NxRVT+vpv8k52wss
Nov 1 00:24:11.189440 sshd-session[5430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:24:11.195005 systemd-logind[1575]: New session 26 of user core.
Nov 1 00:24:11.203097 systemd[1]: Started session-26.scope - Session 26 of User core.
Nov 1 00:24:11.334431 sshd[5433]: Connection closed by 10.0.0.1 port 43540
Nov 1 00:24:11.334890 sshd-session[5430]: pam_unix(sshd:session): session closed for user core
Nov 1 00:24:11.340731 systemd[1]: sshd@25-10.0.0.116:22-10.0.0.1:43540.service: Deactivated successfully.
Nov 1 00:24:11.343046 systemd[1]: session-26.scope: Deactivated successfully.
Nov 1 00:24:11.344201 systemd-logind[1575]: Session 26 logged out. Waiting for processes to exit.
Nov 1 00:24:11.345713 systemd-logind[1575]: Removed session 26.
Nov 1 00:24:11.751594 kubelet[2776]: E1101 00:24:11.751533 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:24:12.751183 kubelet[2776]: E1101 00:24:12.751113 2776 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 1 00:24:13.753161 kubelet[2776]: E1101 00:24:13.753072 2776 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6c77b7cd5b-rlkpf" podUID="bcc9bae5-bbac-4d40-8ba9-b09cdd29d916"
Nov 1 00:24:14.756403 kubelet[2776]: E1101 00:24:14.756335 2776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7db858884d-rlxtg" podUID="f1b660e9-a196-4e94-8db8-ec0d5d3642c8"
Nov 1 00:24:16.353728 systemd[1]: Started sshd@26-10.0.0.116:22-10.0.0.1:40482.service - OpenSSH per-connection server daemon (10.0.0.1:40482).
Nov 1 00:24:16.479602 sshd[5448]: Accepted publickey for core from 10.0.0.1 port 40482 ssh2: RSA SHA256:ejpXjL08eXwq5E+RKrHGlM9AwE1NxRVT+vpv8k52wss
Nov 1 00:24:16.481721 sshd-session[5448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 1 00:24:16.496621 systemd-logind[1575]: New session 27 of user core.
Nov 1 00:24:16.508339 systemd[1]: Started session-27.scope - Session 27 of User core.
Nov 1 00:24:16.691481 sshd[5451]: Connection closed by 10.0.0.1 port 40482
Nov 1 00:24:16.692256 sshd-session[5448]: pam_unix(sshd:session): session closed for user core
Nov 1 00:24:16.698273 systemd-logind[1575]: Session 27 logged out. Waiting for processes to exit.
Nov 1 00:24:16.698560 systemd[1]: sshd@26-10.0.0.116:22-10.0.0.1:40482.service: Deactivated successfully.
Nov 1 00:24:16.702138 systemd[1]: session-27.scope: Deactivated successfully.
Nov 1 00:24:16.709109 systemd-logind[1575]: Removed session 27.
Nov 1 00:24:17.751977 kubelet[2776]: E1101 00:24:17.751901 2776 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-65dff998bf-kplcg" podUID="8e740e5a-3e3f-487f-be71-f50848ddb11c"