Nov 5 15:52:16.422180 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT_DYNAMIC Wed Nov 5 13:45:21 -00 2025
Nov 5 15:52:16.422255 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c2a05564bcb92d35bbb2f0ae32fe5ddfa8424368122998dedda8bd375a237cb4
Nov 5 15:52:16.423511 kernel: BIOS-provided physical RAM map:
Nov 5 15:52:16.423522 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Nov 5 15:52:16.423530 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Nov 5 15:52:16.423537 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Nov 5 15:52:16.423545 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Nov 5 15:52:16.423553 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Nov 5 15:52:16.423565 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Nov 5 15:52:16.423573 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Nov 5 15:52:16.423584 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Nov 5 15:52:16.423591 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Nov 5 15:52:16.423599 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Nov 5 15:52:16.423606 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Nov 5 15:52:16.423617 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Nov 5 15:52:16.423627 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Nov 5 15:52:16.423638 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Nov 5 15:52:16.423646 kernel: BIOS-e820: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Nov 5 15:52:16.423653 kernel: BIOS-e820: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Nov 5 15:52:16.423661 kernel: BIOS-e820: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Nov 5 15:52:16.423675 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Nov 5 15:52:16.423690 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Nov 5 15:52:16.423703 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Nov 5 15:52:16.423713 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 5 15:52:16.423728 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Nov 5 15:52:16.423746 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 5 15:52:16.423756 kernel: NX (Execute Disable) protection: active
Nov 5 15:52:16.423766 kernel: APIC: Static calls initialized
Nov 5 15:52:16.423776 kernel: e820: update [mem 0x9b319018-0x9b322c57] usable ==> usable
Nov 5 15:52:16.423786 kernel: e820: update [mem 0x9b2dc018-0x9b318e57] usable ==> usable
Nov 5 15:52:16.423796 kernel: extended physical RAM map:
Nov 5 15:52:16.423804 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Nov 5 15:52:16.423811 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Nov 5 15:52:16.423819 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Nov 5 15:52:16.423827 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Nov 5 15:52:16.423835 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Nov 5 15:52:16.423846 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Nov 5 15:52:16.423853 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Nov 5 15:52:16.423861 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b2dc017] usable
Nov 5 15:52:16.423869 kernel: reserve setup_data: [mem 0x000000009b2dc018-0x000000009b318e57] usable
Nov 5 15:52:16.423880 kernel: reserve setup_data: [mem 0x000000009b318e58-0x000000009b319017] usable
Nov 5 15:52:16.423890 kernel: reserve setup_data: [mem 0x000000009b319018-0x000000009b322c57] usable
Nov 5 15:52:16.423898 kernel: reserve setup_data: [mem 0x000000009b322c58-0x000000009bd3efff] usable
Nov 5 15:52:16.423906 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Nov 5 15:52:16.423914 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Nov 5 15:52:16.423922 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Nov 5 15:52:16.423930 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Nov 5 15:52:16.423939 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Nov 5 15:52:16.423947 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce90fff] usable
Nov 5 15:52:16.423957 kernel: reserve setup_data: [mem 0x000000009ce91000-0x000000009ce94fff] reserved
Nov 5 15:52:16.423965 kernel: reserve setup_data: [mem 0x000000009ce95000-0x000000009ce96fff] ACPI NVS
Nov 5 15:52:16.423973 kernel: reserve setup_data: [mem 0x000000009ce97000-0x000000009cedbfff] usable
Nov 5 15:52:16.423981 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Nov 5 15:52:16.423988 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Nov 5 15:52:16.423996 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Nov 5 15:52:16.424004 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 5 15:52:16.424012 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Nov 5 15:52:16.424020 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Nov 5 15:52:16.424032 kernel: efi: EFI v2.7 by EDK II
Nov 5 15:52:16.424041 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b9e4198 RNG=0x9cb73018
Nov 5 15:52:16.424051 kernel: random: crng init done
Nov 5 15:52:16.424062 kernel: efi: Remove mem150: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Nov 5 15:52:16.424070 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Nov 5 15:52:16.424080 kernel: secureboot: Secure boot disabled
Nov 5 15:52:16.424088 kernel: SMBIOS 2.8 present.
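The firmware memory map printed above can be re-examined later on the running system without scrolling back through the boot console; a minimal sketch, assuming only standard procfs and util-linux tools rather than anything Flatcar-specific:

    # Replay the e820/EFI map lines from the kernel ring buffer
    dmesg | grep -i e820
    # Current view of physical address space claims (root shows full addresses)
    sudo cat /proc/iomem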
Nov 5 15:52:16.424096 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Nov 5 15:52:16.424104 kernel: DMI: Memory slots populated: 1/1
Nov 5 15:52:16.424112 kernel: Hypervisor detected: KVM
Nov 5 15:52:16.424120 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Nov 5 15:52:16.424128 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 5 15:52:16.424136 kernel: kvm-clock: using sched offset of 9045269797 cycles
Nov 5 15:52:16.424148 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 5 15:52:16.424157 kernel: tsc: Detected 2794.748 MHz processor
Nov 5 15:52:16.424166 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 5 15:52:16.424181 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 5 15:52:16.424199 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Nov 5 15:52:16.424211 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Nov 5 15:52:16.424223 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 5 15:52:16.424239 kernel: Using GB pages for direct mapping
Nov 5 15:52:16.424249 kernel: ACPI: Early table checksum verification disabled
Nov 5 15:52:16.424258 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Nov 5 15:52:16.424281 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Nov 5 15:52:16.424289 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 15:52:16.424298 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 15:52:16.424307 kernel: ACPI: FACS 0x000000009CBDD000 000040
Nov 5 15:52:16.424331 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 15:52:16.424343 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 15:52:16.424351 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 15:52:16.424360 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 5 15:52:16.424369 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Nov 5 15:52:16.424377 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Nov 5 15:52:16.424386 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Nov 5 15:52:16.424394 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Nov 5 15:52:16.424405 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Nov 5 15:52:16.424414 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Nov 5 15:52:16.424422 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Nov 5 15:52:16.424431 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Nov 5 15:52:16.424439 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Nov 5 15:52:16.424447 kernel: No NUMA configuration found
Nov 5 15:52:16.424456 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Nov 5 15:52:16.424467 kernel: NODE_DATA(0) allocated [mem 0x9ce36dc0-0x9ce3dfff]
Nov 5 15:52:16.424475 kernel: Zone ranges:
Nov 5 15:52:16.424484 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 5 15:52:16.424493 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Nov 5 15:52:16.424501 kernel: Normal empty
Nov 5 15:52:16.424510 kernel: Device empty
Nov 5 15:52:16.424518 kernel: Movable zone start for each node
Nov 5 15:52:16.424529 kernel: Early memory node ranges
Nov 5 15:52:16.424537 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Nov 5 15:52:16.424549 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Nov 5 15:52:16.424558 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Nov 5 15:52:16.424566 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Nov 5 15:52:16.424575 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Nov 5 15:52:16.424583 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Nov 5 15:52:16.424592 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce90fff]
Nov 5 15:52:16.424602 kernel: node 0: [mem 0x000000009ce97000-0x000000009cedbfff]
Nov 5 15:52:16.424613 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Nov 5 15:52:16.424622 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 5 15:52:16.424642 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Nov 5 15:52:16.424656 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Nov 5 15:52:16.424667 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 5 15:52:16.424683 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Nov 5 15:52:16.424706 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Nov 5 15:52:16.424721 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Nov 5 15:52:16.424741 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Nov 5 15:52:16.424754 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Nov 5 15:52:16.424765 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 5 15:52:16.424774 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 5 15:52:16.424785 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 5 15:52:16.424794 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 5 15:52:16.424802 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 5 15:52:16.424811 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 5 15:52:16.424820 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 5 15:52:16.424829 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 5 15:52:16.424837 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 5 15:52:16.424848 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Nov 5 15:52:16.424857 kernel: TSC deadline timer available
Nov 5 15:52:16.424866 kernel: CPU topo: Max. logical packages: 1
Nov 5 15:52:16.424875 kernel: CPU topo: Max. logical dies: 1
Nov 5 15:52:16.424884 kernel: CPU topo: Max. dies per package: 1
Nov 5 15:52:16.424893 kernel: CPU topo: Max. threads per core: 1
Nov 5 15:52:16.424902 kernel: CPU topo: Num. cores per package: 4
Nov 5 15:52:16.424912 kernel: CPU topo: Num. threads per package: 4
Nov 5 15:52:16.424923 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Nov 5 15:52:16.424932 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 5 15:52:16.424940 kernel: kvm-guest: KVM setup pv remote TLB flush
Nov 5 15:52:16.424950 kernel: kvm-guest: setup PV sched yield
Nov 5 15:52:16.424959 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Nov 5 15:52:16.424968 kernel: Booting paravirtualized kernel on KVM
Nov 5 15:52:16.424977 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 5 15:52:16.424988 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Nov 5 15:52:16.424996 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Nov 5 15:52:16.425005 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Nov 5 15:52:16.425013 kernel: pcpu-alloc: [0] 0 1 2 3
Nov 5 15:52:16.425022 kernel: kvm-guest: PV spinlocks enabled
Nov 5 15:52:16.425031 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 5 15:52:16.425045 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c2a05564bcb92d35bbb2f0ae32fe5ddfa8424368122998dedda8bd375a237cb4
Nov 5 15:52:16.425057 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 5 15:52:16.425066 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 5 15:52:16.425074 kernel: Fallback order for Node 0: 0
Nov 5 15:52:16.425083 kernel: Built 1 zonelists, mobility grouping on. Total pages: 641450
Nov 5 15:52:16.425092 kernel: Policy zone: DMA32
Nov 5 15:52:16.425100 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 5 15:52:16.425112 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 5 15:52:16.425120 kernel: ftrace: allocating 40092 entries in 157 pages
Nov 5 15:52:16.425129 kernel: ftrace: allocated 157 pages with 5 groups
Nov 5 15:52:16.425137 kernel: Dynamic Preempt: voluntary
Nov 5 15:52:16.425146 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 5 15:52:16.425155 kernel: rcu: RCU event tracing is enabled.
Nov 5 15:52:16.425165 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 5 15:52:16.425174 kernel: Trampoline variant of Tasks RCU enabled.
Nov 5 15:52:16.425194 kernel: Rude variant of Tasks RCU enabled.
Nov 5 15:52:16.425208 kernel: Tracing variant of Tasks RCU enabled.
Nov 5 15:52:16.425220 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 5 15:52:16.425232 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 5 15:52:16.425253 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 5 15:52:16.425278 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 5 15:52:16.425290 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 5 15:52:16.425307 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Nov 5 15:52:16.425335 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 5 15:52:16.425348 kernel: Console: colour dummy device 80x25
Nov 5 15:52:16.425361 kernel: printk: legacy console [ttyS0] enabled
Nov 5 15:52:16.425373 kernel: ACPI: Core revision 20240827
Nov 5 15:52:16.425386 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Nov 5 15:52:16.425399 kernel: APIC: Switch to symmetric I/O mode setup
Nov 5 15:52:16.425415 kernel: x2apic enabled
Nov 5 15:52:16.425426 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 5 15:52:16.425437 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Nov 5 15:52:16.425448 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Nov 5 15:52:16.425459 kernel: kvm-guest: setup PV IPIs
Nov 5 15:52:16.425471 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Nov 5 15:52:16.425483 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Nov 5 15:52:16.425498 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Nov 5 15:52:16.425510 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 5 15:52:16.425522 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 5 15:52:16.425534 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 5 15:52:16.425547 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 5 15:52:16.425559 kernel: Spectre V2 : Mitigation: Retpolines
Nov 5 15:52:16.425570 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 5 15:52:16.425586 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 5 15:52:16.425598 kernel: active return thunk: retbleed_return_thunk
Nov 5 15:52:16.425611 kernel: RETBleed: Mitigation: untrained return thunk
Nov 5 15:52:16.425628 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 5 15:52:16.425641 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 5 15:52:16.425653 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 5 15:52:16.425667 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 5 15:52:16.425683 kernel: active return thunk: srso_return_thunk
Nov 5 15:52:16.425696 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 5 15:52:16.425709 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 5 15:52:16.425721 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 5 15:52:16.425735 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 5 15:52:16.425748 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 5 15:52:16.425761 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 5 15:52:16.425781 kernel: Freeing SMP alternatives memory: 32K
Nov 5 15:52:16.425794 kernel: pid_max: default: 32768 minimum: 301
Nov 5 15:52:16.425806 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 5 15:52:16.425818 kernel: landlock: Up and running.
Nov 5 15:52:16.425830 kernel: SELinux: Initializing.
Nov 5 15:52:16.425842 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 5 15:52:16.425854 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 5 15:52:16.425872 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 5 15:52:16.425886 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 5 15:52:16.425899 kernel: ... version: 0
Nov 5 15:52:16.425912 kernel: ... bit width: 48
Nov 5 15:52:16.425925 kernel: ... generic registers: 6
Nov 5 15:52:16.425938 kernel: ... value mask: 0000ffffffffffff
Nov 5 15:52:16.425950 kernel: ... max period: 00007fffffffffff
Nov 5 15:52:16.425966 kernel: ... fixed-purpose events: 0
Nov 5 15:52:16.425979 kernel: ... event mask: 000000000000003f
Nov 5 15:52:16.425991 kernel: signal: max sigframe size: 1776
Nov 5 15:52:16.426003 kernel: rcu: Hierarchical SRCU implementation.
Nov 5 15:52:16.426017 kernel: rcu: Max phase no-delay instances is 400.
Nov 5 15:52:16.426038 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 5 15:52:16.426050 kernel: smp: Bringing up secondary CPUs ...
Nov 5 15:52:16.426067 kernel: smpboot: x86: Booting SMP configuration:
Nov 5 15:52:16.426080 kernel: .... node #0, CPUs: #1 #2 #3
Nov 5 15:52:16.426092 kernel: smp: Brought up 1 node, 4 CPUs
Nov 5 15:52:16.426104 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Nov 5 15:52:16.426117 kernel: Memory: 2445192K/2565800K available (14336K kernel code, 2443K rwdata, 26064K rodata, 15964K init, 2080K bss, 114668K reserved, 0K cma-reserved)
Nov 5 15:52:16.426129 kernel: devtmpfs: initialized
Nov 5 15:52:16.426141 kernel: x86/mm: Memory block size: 128MB
Nov 5 15:52:16.426156 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Nov 5 15:52:16.426169 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Nov 5 15:52:16.426181 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Nov 5 15:52:16.426192 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Nov 5 15:52:16.426204 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce95000-0x9ce96fff] (8192 bytes)
Nov 5 15:52:16.426217 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Nov 5 15:52:16.426230 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 5 15:52:16.426248 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 5 15:52:16.426276 kernel: pinctrl core: initialized pinctrl subsystem
Nov 5 15:52:16.426289 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 5 15:52:16.426302 kernel: audit: initializing netlink subsys (disabled)
Nov 5 15:52:16.426332 kernel: audit: type=2000 audit(1762357928.524:1): state=initialized audit_enabled=0 res=1
Nov 5 15:52:16.426346 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 5 15:52:16.426358 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 5 15:52:16.426375 kernel: cpuidle: using governor menu
Nov 5 15:52:16.426389 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 5 15:52:16.426402 kernel: dca service started, version 1.12.1
Nov 5 15:52:16.426414 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Nov 5 15:52:16.426426 kernel: PCI: Using configuration type 1 for base access
Nov 5 15:52:16.426439 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 5 15:52:16.426452 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 5 15:52:16.426469 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 5 15:52:16.426482 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 5 15:52:16.426494 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 5 15:52:16.426507 kernel: ACPI: Added _OSI(Module Device)
Nov 5 15:52:16.426520 kernel: ACPI: Added _OSI(Processor Device)
Nov 5 15:52:16.426533 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 5 15:52:16.426545 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 5 15:52:16.426562 kernel: ACPI: Interpreter enabled
Nov 5 15:52:16.426575 kernel: ACPI: PM: (supports S0 S3 S5)
Nov 5 15:52:16.426589 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 5 15:52:16.426603 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 5 15:52:16.426617 kernel: PCI: Using E820 reservations for host bridge windows
Nov 5 15:52:16.426630 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Nov 5 15:52:16.426644 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 5 15:52:16.427124 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 5 15:52:16.427435 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Nov 5 15:52:16.427683 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Nov 5 15:52:16.427702 kernel: PCI host bridge to bus 0000:00
Nov 5 15:52:16.427986 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Nov 5 15:52:16.428217 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Nov 5 15:52:16.428485 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 5 15:52:16.428717 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Nov 5 15:52:16.428943 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Nov 5 15:52:16.429157 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Nov 5 15:52:16.429409 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 5 15:52:16.429696 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Nov 5 15:52:16.429955 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Nov 5 15:52:16.430178 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Nov 5 15:52:16.430472 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Nov 5 15:52:16.430706 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Nov 5 15:52:16.430965 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 5 15:52:16.431256 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 5 15:52:16.431537 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Nov 5 15:52:16.431804 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Nov 5 15:52:16.432072 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Nov 5 15:52:16.432407 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 5 15:52:16.432668 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Nov 5 15:52:16.432937 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Nov 5 15:52:16.433198 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Nov 5 15:52:16.433510 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 5 15:52:16.433780 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Nov 5 15:52:16.434049 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Nov 5 15:52:16.434346 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Nov 5 15:52:16.434598 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Nov 5 15:52:16.434880 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Nov 5 15:52:16.435133 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Nov 5 15:52:16.435442 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Nov 5 15:52:16.435701 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Nov 5 15:52:16.435978 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Nov 5 15:52:16.436241 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Nov 5 15:52:16.436514 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Nov 5 15:52:16.436535 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 5 15:52:16.436555 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 5 15:52:16.436567 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 5 15:52:16.436587 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 5 15:52:16.436598 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Nov 5 15:52:16.436609 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Nov 5 15:52:16.436620 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Nov 5 15:52:16.436632 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Nov 5 15:52:16.436643 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Nov 5 15:52:16.436654 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Nov 5 15:52:16.436669 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Nov 5 15:52:16.436681 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Nov 5 15:52:16.436693 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Nov 5 15:52:16.436705 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Nov 5 15:52:16.436717 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Nov 5 15:52:16.436728 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Nov 5 15:52:16.436742 kernel: iommu: Default domain type: Translated
Nov 5 15:52:16.436758 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 5 15:52:16.436770 kernel: efivars: Registered efivars operations
Nov 5 15:52:16.436782 kernel: PCI: Using ACPI for IRQ routing
Nov 5 15:52:16.436794 kernel: PCI: pci_cache_line_size set to 64 bytes
Nov 5 15:52:16.436806 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Nov 5 15:52:16.436817 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Nov 5 15:52:16.436828 kernel: e820: reserve RAM buffer [mem 0x9b2dc018-0x9bffffff]
Nov 5 15:52:16.436844 kernel: e820: reserve RAM buffer [mem 0x9b319018-0x9bffffff]
Nov 5 15:52:16.436856 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Nov 5 15:52:16.436868 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Nov 5 15:52:16.436881 kernel: e820: reserve RAM buffer [mem 0x9ce91000-0x9fffffff]
Nov 5 15:52:16.436893 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Nov 5 15:52:16.437148 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Nov 5 15:52:16.437426 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Nov 5 15:52:16.437651 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 5 15:52:16.437669 kernel: vgaarb: loaded
Nov 5 15:52:16.437682 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Nov 5 15:52:16.437694 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Nov 5 15:52:16.437706 kernel: clocksource: Switched to clocksource kvm-clock
Nov 5 15:52:16.437718 kernel: VFS: Disk quotas dquot_6.6.0
Nov 5 15:52:16.437730 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 5 15:52:16.437748 kernel: pnp: PnP ACPI init
Nov 5 15:52:16.438040 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Nov 5 15:52:16.438067 kernel: pnp: PnP ACPI: found 6 devices
Nov 5 15:52:16.438080 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 5 15:52:16.438093 kernel: NET: Registered PF_INET protocol family
Nov 5 15:52:16.438105 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 5 15:52:16.438122 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 5 15:52:16.438135 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 5 15:52:16.438148 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 5 15:52:16.438161 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 5 15:52:16.438174 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 5 15:52:16.438187 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 5 15:52:16.438200 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 5 15:52:16.438218 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 5 15:52:16.438231 kernel: NET: Registered PF_XDP protocol family
Nov 5 15:52:16.438502 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Nov 5 15:52:16.438738 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Nov 5 15:52:16.438980 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Nov 5 15:52:16.439188 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Nov 5 15:52:16.439454 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 5 15:52:16.439664 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Nov 5 15:52:16.439876 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Nov 5 15:52:16.440084 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Nov 5 15:52:16.440103 kernel: PCI: CLS 0 bytes, default 64
Nov 5 15:52:16.440116 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Nov 5 15:52:16.440136 kernel: Initialise system trusted keyrings
Nov 5 15:52:16.440149 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 5 15:52:16.440161 kernel: Key type asymmetric registered
Nov 5 15:52:16.440173 kernel: Asymmetric key parser 'x509' registered
Nov 5 15:52:16.440185 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 5 15:52:16.440201 kernel: io scheduler mq-deadline registered
Nov 5 15:52:16.440213 kernel: io scheduler kyber registered
Nov 5 15:52:16.440225 kernel: io scheduler bfq registered
Nov 5 15:52:16.440237 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 5 15:52:16.440250 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Nov 5 15:52:16.440274 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Nov 5 15:52:16.440287 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Nov 5 15:52:16.440299 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 5 15:52:16.440344 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 5 15:52:16.440356 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 5 15:52:16.440369 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 5 15:52:16.440381 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 5 15:52:16.440635 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 5 15:52:16.440657 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Nov 5 15:52:16.440890 kernel: rtc_cmos 00:04: registered as rtc0
Nov 5 15:52:16.441122 kernel: rtc_cmos 00:04: setting system clock to 2025-11-05T15:52:10 UTC (1762357930)
Nov 5 15:52:16.441443 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Nov 5 15:52:16.441463 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 5 15:52:16.441482 kernel: efifb: probing for efifb
Nov 5 15:52:16.441494 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Nov 5 15:52:16.441506 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Nov 5 15:52:16.441520 kernel: efifb: scrolling: redraw
Nov 5 15:52:16.441531 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Nov 5 15:52:16.441543 kernel: Console: switching to colour frame buffer device 160x50
Nov 5 15:52:16.441554 kernel: fb0: EFI VGA frame buffer device
Nov 5 15:52:16.441566 kernel: pstore: Using crash dump compression: deflate
Nov 5 15:52:16.441577 kernel: pstore: Registered efi_pstore as persistent store backend
Nov 5 15:52:16.441589 kernel: NET: Registered PF_INET6 protocol family
Nov 5 15:52:16.441602 kernel: Segment Routing with IPv6
Nov 5 15:52:16.441614 kernel: In-situ OAM (IOAM) with IPv6
Nov 5 15:52:16.441625 kernel: NET: Registered PF_PACKET protocol family
Nov 5 15:52:16.441637 kernel: Key type dns_resolver registered
Nov 5 15:52:16.441648 kernel: IPI shorthand broadcast: enabled
Nov 5 15:52:16.441660 kernel: sched_clock: Marking stable (3354201525, 464738974)->(4032665958, -213725459)
Nov 5 15:52:16.441672 kernel: registered taskstats version 1
Nov 5 15:52:16.441692 kernel: Loading compiled-in X.509 certificates
Nov 5 15:52:16.441707 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 9f02cc8d588ce542f03b0da66dde47a90a145382'
Nov 5 15:52:16.441720 kernel: Demotion targets for Node 0: null
Nov 5 15:52:16.441732 kernel: Key type .fscrypt registered
Nov 5 15:52:16.441745 kernel: Key type fscrypt-provisioning registered
Nov 5 15:52:16.441759 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 5 15:52:16.441771 kernel: ima: Allocated hash algorithm: sha1
Nov 5 15:52:16.441782 kernel: ima: No architecture policies found
Nov 5 15:52:16.441799 kernel: clk: Disabling unused clocks
Nov 5 15:52:16.441811 kernel: Freeing unused kernel image (initmem) memory: 15964K
Nov 5 15:52:16.441824 kernel: Write protecting the kernel read-only data: 40960k
Nov 5 15:52:16.441836 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Nov 5 15:52:16.441848 kernel: Run /init as init process
Nov 5 15:52:16.441860 kernel: with arguments:
Nov 5 15:52:16.441872 kernel: /init
Nov 5 15:52:16.441888 kernel: with environment:
Nov 5 15:52:16.441900 kernel: HOME=/
Nov 5 15:52:16.441911 kernel: TERM=linux
Nov 5 15:52:16.441923 kernel: SCSI subsystem initialized
Nov 5 15:52:16.441941 kernel: libata version 3.00 loaded.
Nov 5 15:52:16.442193 kernel: ahci 0000:00:1f.2: version 3.0
Nov 5 15:52:16.442214 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Nov 5 15:52:16.442495 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Nov 5 15:52:16.442734 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Nov 5 15:52:16.442999 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Nov 5 15:52:16.443307 kernel: scsi host0: ahci
Nov 5 15:52:16.443638 kernel: scsi host1: ahci
Nov 5 15:52:16.443944 kernel: scsi host2: ahci
Nov 5 15:52:16.444248 kernel: scsi host3: ahci
Nov 5 15:52:16.444604 kernel: scsi host4: ahci
Nov 5 15:52:16.444922 kernel: scsi host5: ahci
Nov 5 15:52:16.444942 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 26 lpm-pol 1
Nov 5 15:52:16.444954 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 26 lpm-pol 1
Nov 5 15:52:16.444974 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 26 lpm-pol 1
Nov 5 15:52:16.444985 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 26 lpm-pol 1
Nov 5 15:52:16.444997 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 26 lpm-pol 1
Nov 5 15:52:16.445009 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 26 lpm-pol 1
Nov 5 15:52:16.445021 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Nov 5 15:52:16.445032 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Nov 5 15:52:16.445044 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Nov 5 15:52:16.445057 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Nov 5 15:52:16.445076 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Nov 5 15:52:16.445091 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Nov 5 15:52:16.445106 kernel: ata3.00: LPM support broken, forcing max_power
Nov 5 15:52:16.445122 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 5 15:52:16.445134 kernel: ata3.00: applying bridge limits
Nov 5 15:52:16.445146 kernel: ata3.00: LPM support broken, forcing max_power
Nov 5 15:52:16.445162 kernel: ata3.00: configured for UDMA/100
Nov 5 15:52:16.445471 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Nov 5 15:52:16.445807 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Nov 5 15:52:16.446064 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB)
Nov 5 15:52:16.446085 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 5 15:52:16.446099 kernel: GPT:16515071 != 27000831
Nov 5 15:52:16.446117 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 5 15:52:16.446129 kernel: GPT:16515071 != 27000831
Nov 5 15:52:16.446142 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 5 15:52:16.446154 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 5 15:52:16.446464 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 5 15:52:16.446486 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 5 15:52:16.446768 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Nov 5 15:52:16.446800 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 5 15:52:16.446818 kernel: device-mapper: uevent: version 1.0.3
Nov 5 15:52:16.446837 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Nov 5 15:52:16.446850 kernel: device-mapper: verity: sha256 using shash "sha256-generic"
Nov 5 15:52:16.446869 kernel: raid6: avx2x4 gen() 19057 MB/s
Nov 5 15:52:16.446883 kernel: raid6: avx2x2 gen() 17507 MB/s
Nov 5 15:52:16.446895 kernel: raid6: avx2x1 gen() 13894 MB/s
Nov 5 15:52:16.446912 kernel: raid6: using algorithm avx2x4 gen() 19057 MB/s
Nov 5 15:52:16.446931 kernel: raid6: .... xor() 3982 MB/s, rmw enabled
Nov 5 15:52:16.446945 kernel: raid6: using avx2x2 recovery algorithm
Nov 5 15:52:16.446960 kernel: xor: automatically using best checksumming function avx
Nov 5 15:52:16.446977 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 5 15:52:16.446989 kernel: BTRFS: device fsid a4c7be9c-39f6-471d-8a4c-d50144c6bf01 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (182)
Nov 5 15:52:16.447000 kernel: BTRFS info (device dm-0): first mount of filesystem a4c7be9c-39f6-471d-8a4c-d50144c6bf01
Nov 5 15:52:16.447018 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 5 15:52:16.447039 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 5 15:52:16.447052 kernel: BTRFS info (device dm-0): enabling free space tree
Nov 5 15:52:16.447070 kernel: loop: module loaded
Nov 5 15:52:16.447084 kernel: loop0: detected capacity change from 0 to 100120
Nov 5 15:52:16.447096 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 5 15:52:16.447110 systemd[1]: Successfully made /usr/ read-only.
Nov 5 15:52:16.447140 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 5 15:52:16.447155 systemd[1]: Detected virtualization kvm.
Nov 5 15:52:16.447176 systemd[1]: Detected architecture x86-64.
Nov 5 15:52:16.447189 systemd[1]: Running in initrd.
Nov 5 15:52:16.447202 systemd[1]: No hostname configured, using default hostname.
Nov 5 15:52:16.447214 systemd[1]: Hostname set to .
Nov 5 15:52:16.447241 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Nov 5 15:52:16.447255 systemd[1]: Queued start job for default target initrd.target.
Nov 5 15:52:16.447289 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 5 15:52:16.447302 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 5 15:52:16.447339 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
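The verity and mount messages above show /usr being mapped through dm-verity (verity.usr/verity.usrhash on the kernel command line) before it is made read-only. One way to inspect that mapping from a shell later in boot is sketched below; the target name usr is inferred from mount.usr=/dev/mapper/usr, and veritysetup (part of cryptsetup) is assumed to be available:

    # Show status of the dm-verity target backing /usr
    sudo veritysetup status usr
    # Lower-level view of the same device-mapper table, including the root hash
    sudo dmsetup table usr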
Nov 5 15:52:16.447357 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 5 15:52:16.447383 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 5 15:52:16.447397 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 5 15:52:16.447418 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 5 15:52:16.447440 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 5 15:52:16.447455 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 5 15:52:16.447475 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Nov 5 15:52:16.447502 systemd[1]: Reached target paths.target - Path Units.
Nov 5 15:52:16.447519 systemd[1]: Reached target slices.target - Slice Units.
Nov 5 15:52:16.447539 systemd[1]: Reached target swap.target - Swaps.
Nov 5 15:52:16.447554 systemd[1]: Reached target timers.target - Timer Units.
Nov 5 15:52:16.447573 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 5 15:52:16.447588 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 5 15:52:16.447600 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 5 15:52:16.447618 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Nov 5 15:52:16.447639 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 5 15:52:16.447654 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 5 15:52:16.447673 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 5 15:52:16.447694 systemd[1]: Reached target sockets.target - Socket Units.
Nov 5 15:52:16.447716 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 5 15:52:16.447740 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 5 15:52:16.447757 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 5 15:52:16.447777 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 5 15:52:16.447792 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Nov 5 15:52:16.447804 systemd[1]: Starting systemd-fsck-usr.service...
Nov 5 15:52:16.447822 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 5 15:52:16.447834 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 5 15:52:16.447852 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 15:52:16.447866 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 5 15:52:16.447882 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 5 15:52:16.447896 systemd[1]: Finished systemd-fsck-usr.service.
Nov 5 15:52:16.447913 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 5 15:52:16.447980 systemd-journald[319]: Collecting audit messages is disabled.
Nov 5 15:52:16.448011 systemd-journald[319]: Journal started
Nov 5 15:52:16.448039 systemd-journald[319]: Runtime Journal (/run/log/journal/9a9b9b1100c94f3eb6d8083c1eda44bb) is 6M, max 48.1M, 42.1M free.
Nov 5 15:52:16.450348 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 5 15:52:16.453630 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 5 15:52:16.594136 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 5 15:52:16.616163 systemd-tmpfiles[330]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Nov 5 15:52:16.652752 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 15:52:16.692572 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 5 15:52:16.741618 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 5 15:52:16.781542 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 5 15:52:16.864477 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 5 15:52:16.892999 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 5 15:52:16.893356 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 5 15:52:16.917409 kernel: Bridge firewalling registered
Nov 5 15:52:16.917481 systemd-modules-load[320]: Inserted module 'br_netfilter'
Nov 5 15:52:16.922482 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 5 15:52:16.924753 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 5 15:52:16.944547 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 5 15:52:17.032963 dracut-cmdline[352]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c2a05564bcb92d35bbb2f0ae32fe5ddfa8424368122998dedda8bd375a237cb4
Nov 5 15:52:17.053300 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 5 15:52:17.075604 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 5 15:52:17.322579 systemd-resolved[373]: Positive Trust Anchors:
Nov 5 15:52:17.323597 systemd-resolved[373]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 5 15:52:17.326671 systemd-resolved[373]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 5 15:52:17.326719 systemd-resolved[373]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 5 15:52:17.519003 systemd-resolved[373]: Defaulting to hostname 'linux'.
Nov 5 15:52:17.523144 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 5 15:52:17.529145 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 5 15:52:17.699562 kernel: Loading iSCSI transport class v2.0-870.
Nov 5 15:52:17.776284 kernel: iscsi: registered transport (tcp)
Nov 5 15:52:17.854341 kernel: iscsi: registered transport (qla4xxx)
Nov 5 15:52:17.854452 kernel: QLogic iSCSI HBA Driver
Nov 5 15:52:17.996812 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 5 15:52:18.072257 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 5 15:52:18.084080 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 5 15:52:18.279889 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 5 15:52:18.307043 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 5 15:52:18.345639 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 5 15:52:18.446095 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 5 15:52:18.457240 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 5 15:52:18.585614 systemd-udevd[589]: Using default interface naming scheme 'v257'.
Nov 5 15:52:18.673473 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 5 15:52:18.703940 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 5 15:52:18.983565 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 5 15:52:19.014549 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 5 15:52:19.073013 dracut-pre-trigger[656]: rd.md=0: removing MD RAID activation
Nov 5 15:52:19.178635 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 5 15:52:19.200912 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 5 15:52:19.221655 systemd-networkd[701]: lo: Link UP
Nov 5 15:52:19.221672 systemd-networkd[701]: lo: Gained carrier
Nov 5 15:52:19.226838 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 5 15:52:19.232981 systemd[1]: Reached target network.target - Network.
Nov 5 15:52:19.431934 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 5 15:52:19.467093 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 5 15:52:19.858498 kernel: cryptd: max_cpu_qlen set to 1000
Nov 5 15:52:19.983063 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 5 15:52:19.985766 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 15:52:19.994930 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 15:52:20.003929 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 15:52:20.052487 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 5 15:52:20.115993 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 5 15:52:20.148102 kernel: AES CTR mode by8 optimization enabled
Nov 5 15:52:20.153684 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 5 15:52:20.194098 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 5 15:52:20.268512 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 5 15:52:20.274277 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 5 15:52:20.274443 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 15:52:20.291527 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 15:52:20.338517 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 15:52:20.426511 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Nov 5 15:52:20.483423 disk-uuid[840]: Primary Header is updated.
Nov 5 15:52:20.483423 disk-uuid[840]: Secondary Entries is updated.
Nov 5 15:52:20.483423 disk-uuid[840]: Secondary Header is updated.
Nov 5 15:52:20.503400 systemd-networkd[701]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 5 15:52:20.503425 systemd-networkd[701]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 5 15:52:20.513435 systemd-networkd[701]: eth0: Link UP
Nov 5 15:52:20.526434 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 15:52:20.527244 systemd-networkd[701]: eth0: Gained carrier
Nov 5 15:52:20.527265 systemd-networkd[701]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 5 15:52:20.600172 systemd-networkd[701]: eth0: DHCPv4 address 10.0.0.94/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 5 15:52:20.782745 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 5 15:52:20.787668 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 5 15:52:20.813389 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 5 15:52:20.820278 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 5 15:52:20.844441 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 5 15:52:20.975186 systemd-resolved[373]: Detected conflict on linux IN A 10.0.0.94
Nov 5 15:52:20.975211 systemd-resolved[373]: Hostname conflict, changing published hostname from 'linux' to 'linux11'.
Nov 5 15:52:20.976169 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 5 15:52:21.615072 disk-uuid[852]: Warning: The kernel is still using the old partition table.
Nov 5 15:52:21.615072 disk-uuid[852]: The new table will be used at the next reboot or after you
Nov 5 15:52:21.615072 disk-uuid[852]: run partprobe(8) or kpartx(8)
Nov 5 15:52:21.615072 disk-uuid[852]: The operation has completed successfully.
Nov 5 15:52:21.667552 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 5 15:52:21.681147 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 5 15:52:21.702564 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
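The disk-uuid warning above says the kernel is still using the old partition table and names partprobe(8)/kpartx(8); the earlier GPT messages likewise point at GNU Parted. A minimal sketch of acting on both hints by hand, assuming the /dev/vda disk from this log:

    # Ask the kernel to re-read the rewritten GPT without rebooting
    sudo partprobe /dev/vda
    # Print the table; parted detects the misplaced backup GPT header and offers to fix it
    sudo parted /dev/vda print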
Nov 5 15:52:21.854438 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (878)
Nov 5 15:52:21.860027 kernel: BTRFS info (device vda6): first mount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4
Nov 5 15:52:21.860135 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 5 15:52:21.884791 kernel: BTRFS info (device vda6): turning on async discard
Nov 5 15:52:21.884889 kernel: BTRFS info (device vda6): enabling free space tree
Nov 5 15:52:21.888572 systemd-networkd[701]: eth0: Gained IPv6LL
Nov 5 15:52:21.923199 kernel: BTRFS info (device vda6): last unmount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4
Nov 5 15:52:21.962044 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 5 15:52:21.977391 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 5 15:52:23.167667 ignition[897]: Ignition 2.22.0
Nov 5 15:52:23.167690 ignition[897]: Stage: fetch-offline
Nov 5 15:52:23.167745 ignition[897]: no configs at "/usr/lib/ignition/base.d"
Nov 5 15:52:23.167762 ignition[897]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 5 15:52:23.167887 ignition[897]: parsed url from cmdline: ""
Nov 5 15:52:23.167892 ignition[897]: no config URL provided
Nov 5 15:52:23.167899 ignition[897]: reading system config file "/usr/lib/ignition/user.ign"
Nov 5 15:52:23.167915 ignition[897]: no config at "/usr/lib/ignition/user.ign"
Nov 5 15:52:23.167981 ignition[897]: op(1): [started] loading QEMU firmware config module
Nov 5 15:52:23.167988 ignition[897]: op(1): executing: "modprobe" "qemu_fw_cfg"
Nov 5 15:52:23.331224 ignition[897]: op(1): [finished] loading QEMU firmware config module
Nov 5 15:52:23.451250 ignition[897]: parsing config with SHA512: d8fe7cdd9b6b3508323af87f2d693ab994fbfbe681071590e4123065e6bd7d6bacc60934990cd00a4a33ecd7d48e616e0df73c69dfc52aa7ff16a370f0dafbab
Nov 5 15:52:23.464932 unknown[897]: fetched base config from "system"
Nov 5 15:52:23.465004 unknown[897]: fetched user config from "qemu"
Nov 5 15:52:23.472859 ignition[897]: fetch-offline: fetch-offline passed
Nov 5 15:52:23.478214 ignition[897]: Ignition finished successfully
Nov 5 15:52:23.499730 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 5 15:52:23.509113 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 5 15:52:23.511086 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 5 15:52:23.975973 ignition[910]: Ignition 2.22.0
Nov 5 15:52:23.975996 ignition[910]: Stage: kargs
Nov 5 15:52:23.976242 ignition[910]: no configs at "/usr/lib/ignition/base.d"
Nov 5 15:52:23.976257 ignition[910]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 5 15:52:23.985741 ignition[910]: kargs: kargs passed
Nov 5 15:52:23.985854 ignition[910]: Ignition finished successfully
Nov 5 15:52:24.005493 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 5 15:52:24.028572 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 5 15:52:24.235217 ignition[918]: Ignition 2.22.0
Nov 5 15:52:24.236384 ignition[918]: Stage: disks
Nov 5 15:52:24.240274 ignition[918]: no configs at "/usr/lib/ignition/base.d"
Nov 5 15:52:24.240299 ignition[918]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 5 15:52:24.242367 ignition[918]: disks: disks passed
Nov 5 15:52:24.252771 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 5 15:52:24.242444 ignition[918]: Ignition finished successfully
Nov 5 15:52:24.273745 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 5 15:52:24.283052 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 5 15:52:24.298438 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 5 15:52:24.304566 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 5 15:52:24.336264 systemd[1]: Reached target basic.target - Basic System.
Nov 5 15:52:24.344439 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 5 15:52:24.484106 systemd-fsck[928]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Nov 5 15:52:24.505637 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 5 15:52:24.532204 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 5 15:52:24.937156 kernel: EXT4-fs (vda9): mounted filesystem f3db699e-c9e0-4f6b-8c2b-aa40a78cd116 r/w with ordered data mode. Quota mode: none.
Nov 5 15:52:24.940705 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 5 15:52:24.944787 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 5 15:52:24.964501 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 5 15:52:24.983191 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 5 15:52:24.995656 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 5 15:52:24.995723 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 5 15:52:24.995770 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 5 15:52:25.076709 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 5 15:52:25.085493 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (936)
Nov 5 15:52:25.094432 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 5 15:52:25.116843 kernel: BTRFS info (device vda6): first mount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4
Nov 5 15:52:25.116877 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 5 15:52:25.124824 kernel: BTRFS info (device vda6): turning on async discard
Nov 5 15:52:25.125104 kernel: BTRFS info (device vda6): enabling free space tree
Nov 5 15:52:25.137240 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 5 15:52:25.400438 initrd-setup-root[960]: cut: /sysroot/etc/passwd: No such file or directory
Nov 5 15:52:25.434064 initrd-setup-root[967]: cut: /sysroot/etc/group: No such file or directory
Nov 5 15:52:25.452577 initrd-setup-root[974]: cut: /sysroot/etc/shadow: No such file or directory
Nov 5 15:52:25.479719 initrd-setup-root[981]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 5 15:52:25.890354 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 5 15:52:25.920034 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 5 15:52:25.930884 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 5 15:52:25.967211 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 5 15:52:25.976819 kernel: BTRFS info (device vda6): last unmount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4
Nov 5 15:52:26.293941 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 5 15:52:26.314212 ignition[1049]: INFO : Ignition 2.22.0
Nov 5 15:52:26.314212 ignition[1049]: INFO : Stage: mount
Nov 5 15:52:26.322877 ignition[1049]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 5 15:52:26.322877 ignition[1049]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 5 15:52:26.322877 ignition[1049]: INFO : mount: mount passed
Nov 5 15:52:26.322877 ignition[1049]: INFO : Ignition finished successfully
Nov 5 15:52:26.338938 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 5 15:52:26.350231 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 5 15:52:26.425401 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 5 15:52:26.467028 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1062)
Nov 5 15:52:26.476043 kernel: BTRFS info (device vda6): first mount of filesystem fa887730-d07b-4714-9f34-65e9489ec2e4
Nov 5 15:52:26.476134 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Nov 5 15:52:26.482420 kernel: BTRFS info (device vda6): turning on async discard
Nov 5 15:52:26.487818 kernel: BTRFS info (device vda6): enabling free space tree
Nov 5 15:52:26.503255 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 5 15:52:26.645409 ignition[1079]: INFO : Ignition 2.22.0
Nov 5 15:52:26.645409 ignition[1079]: INFO : Stage: files
Nov 5 15:52:26.645409 ignition[1079]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 5 15:52:26.645409 ignition[1079]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 5 15:52:26.645409 ignition[1079]: DEBUG : files: compiled without relabeling support, skipping
Nov 5 15:52:26.692962 ignition[1079]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 5 15:52:26.692962 ignition[1079]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 5 15:52:26.715133 ignition[1079]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 5 15:52:26.736537 ignition[1079]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 5 15:52:26.736537 ignition[1079]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 5 15:52:26.736537 ignition[1079]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 5 15:52:26.736537 ignition[1079]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Nov 5 15:52:26.716983 unknown[1079]: wrote ssh authorized keys file for user: core
Nov 5 15:52:26.953503 ignition[1079]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 5 15:52:27.156686 ignition[1079]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Nov 5 15:52:27.156686 ignition[1079]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 5 15:52:27.156686 ignition[1079]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 5 15:52:27.156686 ignition[1079]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 5 15:52:27.156686 ignition[1079]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 5 15:52:27.156686 ignition[1079]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 5 15:52:27.156686 ignition[1079]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 5 15:52:27.156686 ignition[1079]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 5 15:52:27.156686 ignition[1079]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 5 15:52:27.254811 ignition[1079]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 5 15:52:27.254811 ignition[1079]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 5 15:52:27.254811 ignition[1079]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 5 15:52:27.254811 ignition[1079]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 5 15:52:27.254811 ignition[1079]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 5 15:52:27.254811 ignition[1079]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Nov 5 15:52:28.855391 ignition[1079]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 5 15:52:30.638026 kernel: hrtimer: interrupt took 4224152 ns
Nov 5 15:52:31.661714 ignition[1079]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Nov 5 15:52:31.661714 ignition[1079]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 5 15:52:31.679738 ignition[1079]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 5 15:52:31.827303 ignition[1079]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 5 15:52:31.827303 ignition[1079]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 5 15:52:31.827303 ignition[1079]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Nov 5 15:52:31.827303 ignition[1079]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 5 15:52:31.827303 ignition[1079]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 5 15:52:31.827303 ignition[1079]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Nov 5 15:52:31.827303 ignition[1079]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Nov 5 15:52:31.958117 ignition[1079]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Nov 5 15:52:31.981648 ignition[1079]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Nov 5 15:52:31.993198 ignition[1079]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Nov 5 15:52:31.993198 ignition[1079]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Nov 5 15:52:31.993198 ignition[1079]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Nov 5 15:52:31.993198 ignition[1079]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 5 15:52:31.993198 ignition[1079]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 5 15:52:31.993198 ignition[1079]: INFO : files: files passed
Nov 5 15:52:31.993198 ignition[1079]: INFO : Ignition finished successfully
Nov 5 15:52:31.992257 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 5 15:52:32.002842 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 5 15:52:32.021115 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 5 15:52:32.077685 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 5 15:52:32.078240 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 5 15:52:32.103703 initrd-setup-root-after-ignition[1110]: grep: /sysroot/oem/oem-release: No such file or directory
Nov 5 15:52:32.125572 initrd-setup-root-after-ignition[1112]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 5 15:52:32.125572 initrd-setup-root-after-ignition[1112]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 5 15:52:32.142174 initrd-setup-root-after-ignition[1116]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 5 15:52:32.142799 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 5 15:52:32.157773 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 5 15:52:32.161992 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 5 15:52:32.342057 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 5 15:52:32.344442 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 5 15:52:32.376774 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 5 15:52:32.377071 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 5 15:52:32.379965 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 5 15:52:32.387398 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 5 15:52:32.484910 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 5 15:52:32.492854 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 5 15:52:32.574514 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 5 15:52:32.576521 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 5 15:52:32.587993 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 5 15:52:32.595038 systemd[1]: Stopped target timers.target - Timer Units.
Nov 5 15:52:32.597113 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 5 15:52:32.597393 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 5 15:52:32.606173 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 5 15:52:32.612159 systemd[1]: Stopped target basic.target - Basic System.
Nov 5 15:52:32.617416 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 5 15:52:32.618990 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 5 15:52:32.628581 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 5 15:52:32.650213 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Nov 5 15:52:32.660220 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 5 15:52:32.664161 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 5 15:52:32.669889 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 5 15:52:32.672406 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 5 15:52:32.683149 systemd[1]: Stopped target swap.target - Swaps.
Nov 5 15:52:32.694375 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 5 15:52:32.694657 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 5 15:52:32.712429 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 5 15:52:32.714738 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 5 15:52:32.721838 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 5 15:52:32.724897 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 5 15:52:32.735220 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 5 15:52:32.735494 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 5 15:52:32.740908 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 5 15:52:32.741120 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 5 15:52:32.745055 systemd[1]: Stopped target paths.target - Path Units.
Nov 5 15:52:32.747541 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 5 15:52:32.753470 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 5 15:52:32.768237 systemd[1]: Stopped target slices.target - Slice Units.
Nov 5 15:52:32.780081 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 5 15:52:32.787968 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 5 15:52:32.788143 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 5 15:52:32.794909 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 5 15:52:32.795067 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 5 15:52:32.805461 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 5 15:52:32.805687 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 5 15:52:32.842537 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 5 15:52:32.842755 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 5 15:52:32.848073 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 5 15:52:32.860173 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 5 15:52:32.892767 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 5 15:52:32.902508 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 5 15:52:32.917053 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 5 15:52:32.928303 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 5 15:52:32.934811 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 5 15:52:32.935052 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 5 15:52:32.973713 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 5 15:52:32.983001 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 5 15:52:32.983240 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 5 15:52:33.014455 ignition[1136]: INFO : Ignition 2.22.0
Nov 5 15:52:33.014455 ignition[1136]: INFO : Stage: umount
Nov 5 15:52:33.024621 ignition[1136]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 5 15:52:33.024621 ignition[1136]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 5 15:52:33.024621 ignition[1136]: INFO : umount: umount passed
Nov 5 15:52:33.024621 ignition[1136]: INFO : Ignition finished successfully
Nov 5 15:52:33.016329 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 5 15:52:33.016591 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 5 15:52:33.027192 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 5 15:52:33.027408 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 5 15:52:33.037537 systemd[1]: Stopped target network.target - Network.
Nov 5 15:52:33.045561 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 5 15:52:33.047464 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 5 15:52:33.049564 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 5 15:52:33.049674 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 5 15:52:33.059776 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 5 15:52:33.059898 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 5 15:52:33.092770 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 5 15:52:33.096361 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 5 15:52:33.127858 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 5 15:52:33.127998 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 5 15:52:33.130669 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 5 15:52:33.146166 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 5 15:52:33.226652 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 5 15:52:33.228906 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 5 15:52:33.251632 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 5 15:52:33.251858 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 5 15:52:33.280288 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Nov 5 15:52:33.280539 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 5 15:52:33.280643 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 5 15:52:33.312655 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 5 15:52:33.314622 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 5 15:52:33.314755 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 5 15:52:33.324417 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 5 15:52:33.324554 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 5 15:52:33.330975 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 5 15:52:33.331120 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 5 15:52:33.331280 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 5 15:52:33.398951 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 5 15:52:33.402228 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 5 15:52:33.431932 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 5 15:52:33.432022 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 5 15:52:33.449069 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 5 15:52:33.449164 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 5 15:52:33.457704 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 5 15:52:33.460191 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 5 15:52:33.467363 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 5 15:52:33.467478 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 5 15:52:33.475692 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 5 15:52:33.475813 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 5 15:52:33.491299 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 5 15:52:33.498832 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Nov 5 15:52:33.498995 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Nov 5 15:52:33.516483 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 5 15:52:33.516616 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 5 15:52:33.525131 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Nov 5 15:52:33.525262 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 5 15:52:33.536199 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 5 15:52:33.536365 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 5 15:52:33.540261 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 5 15:52:33.540390 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 15:52:33.567573 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 5 15:52:33.567767 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 5 15:52:33.637369 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 5 15:52:33.639350 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 5 15:52:33.670380 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 5 15:52:33.678044 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 5 15:52:33.724258 systemd[1]: Switching root.
Nov 5 15:52:33.795202 systemd-journald[319]: Journal stopped
Nov 5 15:52:37.929428 systemd-journald[319]: Received SIGTERM from PID 1 (systemd).
Nov 5 15:52:37.929543 kernel: SELinux: policy capability network_peer_controls=1
Nov 5 15:52:37.929568 kernel: SELinux: policy capability open_perms=1
Nov 5 15:52:37.929586 kernel: SELinux: policy capability extended_socket_class=1
Nov 5 15:52:37.929614 kernel: SELinux: policy capability always_check_network=0
Nov 5 15:52:37.929637 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 5 15:52:37.929676 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 5 15:52:37.929711 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 5 15:52:37.929730 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 5 15:52:37.929752 kernel: SELinux: policy capability userspace_initial_context=0
Nov 5 15:52:37.929770 kernel: audit: type=1403 audit(1762357955.279:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 5 15:52:37.929799 systemd[1]: Successfully loaded SELinux policy in 177.677ms.
Nov 5 15:52:37.929840 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 54.698ms.
Nov 5 15:52:37.929864 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 5 15:52:37.929885 systemd[1]: Detected virtualization kvm.
Nov 5 15:52:37.929905 systemd[1]: Detected architecture x86-64.
Nov 5 15:52:37.929923 systemd[1]: Detected first boot.
Nov 5 15:52:37.929941 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Nov 5 15:52:37.929959 zram_generator::config[1182]: No configuration found.
Nov 5 15:52:37.929986 kernel: Guest personality initialized and is inactive
Nov 5 15:52:37.930005 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Nov 5 15:52:37.930023 kernel: Initialized host personality
Nov 5 15:52:37.930042 kernel: NET: Registered PF_VSOCK protocol family
Nov 5 15:52:37.930061 systemd[1]: Populated /etc with preset unit settings.
Nov 5 15:52:37.930081 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 5 15:52:37.930109 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 5 15:52:37.930133 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 5 15:52:37.930154 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 5 15:52:37.931231 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 5 15:52:37.931257 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 5 15:52:37.931278 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 5 15:52:37.931299 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 5 15:52:37.931335 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 5 15:52:37.931364 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 5 15:52:37.931384 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 5 15:52:37.931404 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 5 15:52:37.931424 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 5 15:52:37.931443 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 5 15:52:37.931461 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 5 15:52:37.931481 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 5 15:52:37.931504 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 5 15:52:37.931523 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 5 15:52:37.931542 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 5 15:52:37.931559 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 5 15:52:37.931578 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 5 15:52:37.931598 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 5 15:52:37.931623 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 5 15:52:37.931644 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 5 15:52:37.931938 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 5 15:52:37.932026 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 5 15:52:37.932050 systemd[1]: Reached target slices.target - Slice Units.
Nov 5 15:52:37.932071 systemd[1]: Reached target swap.target - Swaps.
Nov 5 15:52:37.932092 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 5 15:52:37.932118 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 5 15:52:37.932139 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Nov 5 15:52:37.932166 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 5 15:52:37.932187 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 5 15:52:37.932208 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 5 15:52:37.932255 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 5 15:52:37.932280 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 5 15:52:37.932307 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 5 15:52:37.932370 systemd[1]: Mounting media.mount - External Media Directory...
Nov 5 15:52:37.932395 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 15:52:37.932413 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 5 15:52:37.932432 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 5 15:52:37.932451 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 5 15:52:37.932470 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 5 15:52:37.932495 systemd[1]: Reached target machines.target - Containers.
Nov 5 15:52:37.932514 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 5 15:52:37.932533 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 5 15:52:37.932565 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 5 15:52:37.932584 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 5 15:52:37.932603 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 5 15:52:37.932622 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 5 15:52:37.932646 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 5 15:52:37.932793 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 5 15:52:37.932812 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 5 15:52:37.932831 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 5 15:52:37.932937 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 5 15:52:37.932959 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 5 15:52:37.932982 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 5 15:52:37.933000 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 5 15:52:37.933018 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 5 15:52:37.933044 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 5 15:52:37.933223 kernel: fuse: init (API version 7.41)
Nov 5 15:52:37.933248 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 5 15:52:37.933266 kernel: ACPI: bus type drm_connector registered
Nov 5 15:52:37.933289 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 5 15:52:37.933308 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 5 15:52:37.933349 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Nov 5 15:52:37.933368 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 5 15:52:37.933405 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 15:52:37.933486 systemd-journald[1267]: Collecting audit messages is disabled.
Nov 5 15:52:37.933535 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 5 15:52:37.933556 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 5 15:52:37.933575 systemd-journald[1267]: Journal started
Nov 5 15:52:37.933613 systemd-journald[1267]: Runtime Journal (/run/log/journal/9a9b9b1100c94f3eb6d8083c1eda44bb) is 6M, max 48.1M, 42.1M free.
Nov 5 15:52:36.975295 systemd[1]: Queued start job for default target multi-user.target.
Nov 5 15:52:37.000949 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Nov 5 15:52:37.001833 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 5 15:52:37.005050 systemd[1]: systemd-journald.service: Consumed 1.208s CPU time.
Nov 5 15:52:37.957395 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 5 15:52:37.963489 systemd[1]: Mounted media.mount - External Media Directory.
Nov 5 15:52:37.973774 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 5 15:52:37.978992 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 5 15:52:37.984127 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 5 15:52:37.990810 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 5 15:52:37.996474 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 5 15:52:38.004188 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 5 15:52:38.006052 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 5 15:52:38.017907 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 5 15:52:38.018238 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 5 15:52:38.026045 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 5 15:52:38.026409 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 5 15:52:38.034623 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 5 15:52:38.036386 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 5 15:52:38.045173 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 5 15:52:38.046218 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 5 15:52:38.058635 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 5 15:52:38.059617 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 5 15:52:38.071118 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 5 15:52:38.079182 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 5 15:52:38.098923 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 5 15:52:38.105537 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Nov 5 15:52:38.114613 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 5 15:52:38.146185 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 5 15:52:38.152760 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Nov 5 15:52:38.169039 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 5 15:52:38.187451 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 5 15:52:38.190185 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 5 15:52:38.190246 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 5 15:52:38.200547 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Nov 5 15:52:38.212976 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 5 15:52:38.223147 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 5 15:52:38.232934 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 5 15:52:38.235199 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 5 15:52:38.239926 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 5 15:52:38.240300 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 5 15:52:38.262330 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 5 15:52:38.275545 systemd-journald[1267]: Time spent on flushing to /var/log/journal/9a9b9b1100c94f3eb6d8083c1eda44bb is 35.949ms for 1058 entries.
Nov 5 15:52:38.275545 systemd-journald[1267]: System Journal (/var/log/journal/9a9b9b1100c94f3eb6d8083c1eda44bb) is 8M, max 163.5M, 155.5M free.
Nov 5 15:52:38.379487 systemd-journald[1267]: Received client request to flush runtime journal.
Nov 5 15:52:38.379540 kernel: loop1: detected capacity change from 0 to 219144
Nov 5 15:52:38.275553 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 5 15:52:38.287108 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 5 15:52:38.309819 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 5 15:52:38.328682 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 5 15:52:38.337747 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 5 15:52:38.345897 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 5 15:52:38.364608 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Nov 5 15:52:38.383539 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 5 15:52:38.394199 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 5 15:52:38.442892 systemd-tmpfiles[1304]: ACLs are not supported, ignoring.
Nov 5 15:52:38.444085 systemd-tmpfiles[1304]: ACLs are not supported, ignoring.
Nov 5 15:52:38.473629 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 5 15:52:38.482684 kernel: loop2: detected capacity change from 0 to 110984
Nov 5 15:52:38.483903 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Nov 5 15:52:38.498583 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 5 15:52:38.636810 kernel: loop3: detected capacity change from 0 to 128048
Nov 5 15:52:38.638962 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 5 15:52:38.655726 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 5 15:52:38.669942 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 5 15:52:38.719075 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 5 15:52:38.753489 systemd-tmpfiles[1323]: ACLs are not supported, ignoring.
Nov 5 15:52:38.760056 systemd-tmpfiles[1323]: ACLs are not supported, ignoring.
Nov 5 15:52:38.856951 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 5 15:52:38.907386 kernel: loop4: detected capacity change from 0 to 219144
Nov 5 15:52:38.973208 kernel: loop5: detected capacity change from 0 to 110984
Nov 5 15:52:38.980066 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 5 15:52:39.098833 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 5 15:52:39.316767 kernel: loop6: detected capacity change from 0 to 128048
Nov 5 15:52:39.371159 (sd-merge)[1328]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'.
Nov 5 15:52:39.390534 (sd-merge)[1328]: Merged extensions into '/usr'.
Nov 5 15:52:39.408376 systemd[1]: Reload requested from client PID 1302 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 5 15:52:39.408414 systemd[1]: Reloading...
Nov 5 15:52:39.658935 systemd-resolved[1322]: Positive Trust Anchors:
Nov 5 15:52:39.659561 systemd-resolved[1322]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 5 15:52:39.659570 systemd-resolved[1322]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 5 15:52:39.659619 systemd-resolved[1322]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 5 15:52:39.753292 systemd-resolved[1322]: Defaulting to hostname 'linux'.
Nov 5 15:52:39.792385 zram_generator::config[1371]: No configuration found.
Nov 5 15:52:40.366817 systemd[1]: Reloading finished in 957 ms.
Nov 5 15:52:40.425901 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 5 15:52:40.447301 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 5 15:52:40.455868 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 5 15:52:40.487897 systemd[1]: Starting ensure-sysext.service...
Nov 5 15:52:40.502647 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 5 15:52:40.551707 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 5 15:52:40.570053 systemd[1]: Reload requested from client PID 1398 ('systemctl') (unit ensure-sysext.service)...
Nov 5 15:52:40.570077 systemd[1]: Reloading...
Nov 5 15:52:40.653227 systemd-tmpfiles[1399]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Nov 5 15:52:40.653341 systemd-tmpfiles[1399]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Nov 5 15:52:40.653962 systemd-tmpfiles[1399]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 5 15:52:40.654385 systemd-tmpfiles[1399]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 5 15:52:40.660004 systemd-tmpfiles[1399]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 5 15:52:40.660481 systemd-tmpfiles[1399]: ACLs are not supported, ignoring.
Nov 5 15:52:40.662693 systemd-tmpfiles[1399]: ACLs are not supported, ignoring.
Nov 5 15:52:40.688039 systemd-tmpfiles[1399]: Detected autofs mount point /boot during canonicalization of boot.
Nov 5 15:52:40.688064 systemd-tmpfiles[1399]: Skipping /boot
Nov 5 15:52:40.713117 systemd-tmpfiles[1399]: Detected autofs mount point /boot during canonicalization of boot.
Nov 5 15:52:40.713146 systemd-tmpfiles[1399]: Skipping /boot
Nov 5 15:52:40.723365 zram_generator::config[1432]: No configuration found.
Nov 5 15:52:41.211868 systemd[1]: Reloading finished in 640 ms.
Nov 5 15:52:41.316765 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 5 15:52:41.342685 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Nov 5 15:52:41.357118 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 5 15:52:41.376538 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 5 15:52:41.406548 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 5 15:52:41.441419 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 5 15:52:41.451109 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 5 15:52:41.466374 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 15:52:41.466666 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 5 15:52:41.482886 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 5 15:52:41.498739 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 5 15:52:41.515616 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 5 15:52:41.525067 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 5 15:52:41.525279 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 5 15:52:41.525432 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 15:52:41.549636 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 15:52:41.549913 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 5 15:52:41.550190 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 5 15:52:41.553752 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 5 15:52:41.553980 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 15:52:41.561424 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 5 15:52:41.572253 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 5 15:52:41.573792 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 5 15:52:41.579529 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 5 15:52:41.581898 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 5 15:52:41.590764 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 5 15:52:41.591169 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 5 15:52:41.602888 systemd-udevd[1472]: Using default interface naming scheme 'v257'.
Nov 5 15:52:41.632684 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 15:52:41.633029 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 5 15:52:41.642473 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 5 15:52:41.661233 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 5 15:52:41.678013 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 5 15:52:41.707032 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 5 15:52:41.711703 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 5 15:52:41.711939 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Nov 5 15:52:41.712150 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 5 15:52:41.714538 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 5 15:52:41.741810 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 5 15:52:41.743694 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 5 15:52:41.752565 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 5 15:52:41.752909 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 5 15:52:41.763576 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 5 15:52:41.763906 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 5 15:52:41.770606 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 5 15:52:41.773169 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 5 15:52:41.820015 systemd[1]: Finished ensure-sysext.service.
Nov 5 15:52:41.842518 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 5 15:52:41.843523 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 5 15:52:41.862533 augenrules[1510]: No rules
Nov 5 15:52:41.862592 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Nov 5 15:52:41.873468 systemd[1]: audit-rules.service: Deactivated successfully.
Nov 5 15:52:41.873948 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Nov 5 15:52:41.931051 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 5 15:52:41.967761 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 5 15:52:42.319892 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 5 15:52:42.413426 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 5 15:52:42.417614 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 5 15:52:42.688784 kernel: mousedev: PS/2 mouse device common for all mice
Nov 5 15:52:42.705562 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Nov 5 15:52:42.712931 systemd[1]: Reached target time-set.target - System Time Set.
Nov 5 15:52:42.752670 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 5 15:52:43.021733 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 5 15:52:43.038122 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Nov 5 15:52:43.451268 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 5 15:52:43.502129 systemd-networkd[1529]: lo: Link UP
Nov 5 15:52:43.502147 systemd-networkd[1529]: lo: Gained carrier
Nov 5 15:52:43.509810 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 5 15:52:43.774001 systemd-networkd[1529]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 5 15:52:43.774009 systemd-networkd[1529]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 5 15:52:43.780045 systemd-networkd[1529]: eth0: Link UP
Nov 5 15:52:43.781062 systemd-networkd[1529]: eth0: Gained carrier
Nov 5 15:52:43.782016 systemd-networkd[1529]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 5 15:52:43.782757 systemd[1]: Reached target network.target - Network.
Nov 5 15:52:43.791794 kernel: ACPI: button: Power Button [PWRF]
Nov 5 15:52:43.800748 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Nov 5 15:52:43.808625 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 5 15:52:43.854748 systemd-networkd[1529]: eth0: DHCPv4 address 10.0.0.94/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 5 15:52:43.860240 systemd-timesyncd[1511]: Network configuration changed, trying to establish connection.
Nov 5 15:52:44.898077 systemd-timesyncd[1511]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Nov 5 15:52:44.898210 systemd-timesyncd[1511]: Initial clock synchronization to Wed 2025-11-05 15:52:44.897939 UTC.
Nov 5 15:52:44.899416 systemd-resolved[1322]: Clock change detected. Flushing caches.
Nov 5 15:52:44.925503 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 15:52:44.990272 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Nov 5 15:52:45.271363 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 5 15:52:45.272146 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 15:52:45.299243 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Nov 5 15:52:45.305318 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Nov 5 15:52:45.308746 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 5 15:52:45.305225 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 15:52:45.759326 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 15:52:46.727239 systemd-networkd[1529]: eth0: Gained IPv6LL Nov 5 15:52:46.739896 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 5 15:52:46.744703 systemd[1]: Reached target network-online.target - Network is Online. Nov 5 15:52:46.816743 ldconfig[1469]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 5 15:52:46.886104 kernel: kvm_amd: TSC scaling supported Nov 5 15:52:46.886245 kernel: kvm_amd: Nested Virtualization enabled Nov 5 15:52:46.886328 kernel: kvm_amd: Nested Paging enabled Nov 5 15:52:46.887212 kernel: kvm_amd: LBR virtualization supported Nov 5 15:52:46.888304 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Nov 5 15:52:46.889395 kernel: kvm_amd: Virtual GIF supported Nov 5 15:52:46.944775 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 5 15:52:46.963525 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 5 15:52:47.038396 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 5 15:52:47.050263 systemd[1]: Reached target sysinit.target - System Initialization. Nov 5 15:52:47.052704 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 5 15:52:47.060095 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 5 15:52:47.065560 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Nov 5 15:52:47.069265 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 5 15:52:47.080627 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 5 15:52:47.086316 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 5 15:52:47.088810 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 5 15:52:47.088867 systemd[1]: Reached target paths.target - Path Units. Nov 5 15:52:47.093217 systemd[1]: Reached target timers.target - Timer Units. Nov 5 15:52:47.098110 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 5 15:52:47.107040 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 5 15:52:47.118613 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 5 15:52:47.122854 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 5 15:52:47.136131 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 5 15:52:47.150344 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 5 15:52:47.153625 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 5 15:52:47.161650 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 5 15:52:47.170513 systemd[1]: Reached target sockets.target - Socket Units. Nov 5 15:52:47.173625 systemd[1]: Reached target basic.target - Basic System. Nov 5 15:52:47.183354 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 5 15:52:47.183415 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
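Note: the burst of "Listening on ..." and "Started ... timer" messages is socket and timer activation being wired up before the corresponding services exist; docker.service, for instance, will only be started on the first connection to docker.socket. What ended up active can be enumerated with:

    # socket units and the services they activate on demand
    systemctl list-sockets
    # timer units with their next elapse times
    systemctl list-timers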
Nov 5 15:52:47.189564 systemd[1]: Starting containerd.service - containerd container runtime... Nov 5 15:52:47.204226 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 5 15:52:47.296961 kernel: EDAC MC: Ver: 3.0.0 Nov 5 15:52:47.307158 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 5 15:52:47.313234 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 5 15:52:47.337503 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 5 15:52:47.350306 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 5 15:52:47.352526 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 5 15:52:47.370236 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Nov 5 15:52:47.381113 jq[1593]: false Nov 5 15:52:47.395228 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:52:47.414109 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 5 15:52:47.437639 extend-filesystems[1594]: Found /dev/vda6 Nov 5 15:52:47.451311 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 5 15:52:47.453279 extend-filesystems[1594]: Found /dev/vda9 Nov 5 15:52:47.463030 google_oslogin_nss_cache[1595]: oslogin_cache_refresh[1595]: Refreshing passwd entry cache Nov 5 15:52:47.463390 extend-filesystems[1594]: Checking size of /dev/vda9 Nov 5 15:52:47.462027 oslogin_cache_refresh[1595]: Refreshing passwd entry cache Nov 5 15:52:47.463121 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 5 15:52:47.469048 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 5 15:52:47.486218 google_oslogin_nss_cache[1595]: oslogin_cache_refresh[1595]: Failure getting users, quitting Nov 5 15:52:47.486218 google_oslogin_nss_cache[1595]: oslogin_cache_refresh[1595]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 5 15:52:47.486218 google_oslogin_nss_cache[1595]: oslogin_cache_refresh[1595]: Refreshing group entry cache Nov 5 15:52:47.484359 oslogin_cache_refresh[1595]: Failure getting users, quitting Nov 5 15:52:47.484394 oslogin_cache_refresh[1595]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Nov 5 15:52:47.484491 oslogin_cache_refresh[1595]: Refreshing group entry cache Nov 5 15:52:47.494684 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 5 15:52:47.505495 extend-filesystems[1594]: Resized partition /dev/vda9 Nov 5 15:52:47.516204 extend-filesystems[1616]: resize2fs 1.47.3 (8-Jul-2025) Nov 5 15:52:47.512316 oslogin_cache_refresh[1595]: Failure getting groups, quitting Nov 5 15:52:47.521394 google_oslogin_nss_cache[1595]: oslogin_cache_refresh[1595]: Failure getting groups, quitting Nov 5 15:52:47.521394 google_oslogin_nss_cache[1595]: oslogin_cache_refresh[1595]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 5 15:52:47.512341 oslogin_cache_refresh[1595]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Nov 5 15:52:47.525240 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 5 15:52:47.538339 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
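Note: extend-filesystems has located the candidate partitions and is checking whether /dev/vda9 (the root filesystem) needs growing; the oslogin "Failure getting users/groups" lines just mean the NSS cache refreshed to empty files on a host with no OS Login accounts. The disk layout it is working from can be confirmed with, for example:

    # partitions, filesystem types, labels, and mountpoints on the boot disk
    lsblk -f /dev/vda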
Nov 5 15:52:47.539589 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 5 15:52:47.541162 systemd[1]: Starting update-engine.service - Update Engine... Nov 5 15:52:47.556694 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Nov 5 15:52:47.561544 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 5 15:52:47.578618 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 5 15:52:47.583874 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 5 15:52:47.584342 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 5 15:52:47.584837 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Nov 5 15:52:47.586389 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Nov 5 15:52:47.605636 systemd[1]: motdgen.service: Deactivated successfully. Nov 5 15:52:47.626457 update_engine[1628]: I20251105 15:52:47.626194 1628 main.cc:92] Flatcar Update Engine starting Nov 5 15:52:47.627597 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 5 15:52:47.702851 jq[1629]: true Nov 5 15:52:47.636388 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 5 15:52:47.645158 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 5 15:52:47.649058 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 5 15:52:47.736516 tar[1638]: linux-amd64/LICENSE Nov 5 15:52:47.736516 tar[1638]: linux-amd64/helm Nov 5 15:52:47.704222 (ntainerd)[1641]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 5 15:52:47.747569 jq[1639]: true Nov 5 15:52:47.774785 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 5 15:52:47.776913 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 5 15:52:47.779947 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Nov 5 15:52:47.810165 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 5 15:52:47.919988 extend-filesystems[1616]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 5 15:52:47.919988 extend-filesystems[1616]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 5 15:52:47.919988 extend-filesystems[1616]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Nov 5 15:52:47.991423 extend-filesystems[1594]: Resized filesystem in /dev/vda9 Nov 5 15:52:47.998162 update_engine[1628]: I20251105 15:52:47.941738 1628 update_check_scheduler.cc:74] Next update check in 3m4s Nov 5 15:52:47.926710 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 5 15:52:47.937321 dbus-daemon[1591]: [system] SELinux support is enabled Nov 5 15:52:47.928849 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 5 15:52:47.975015 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 5 15:52:47.986493 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 5 15:52:47.986528 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
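Note: the root filesystem was grown online from 456704 to 1784827 4 KiB blocks (roughly 1.7 GiB to 6.8 GiB); ext4 supports growing while mounted, so no reboot is involved. The manual equivalent is roughly the following sketch (growpart from cloud-utils stands in for whatever grew the partition here, which the log does not show):

    # extend partition 9 of /dev/vda into the adjacent free space (cloud-utils)
    growpart /dev/vda 9
    # grow the mounted ext4 filesystem to fill the enlarged partition
    resize2fs /dev/vda9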
Nov 5 15:52:48.000625 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 5 15:52:48.000719 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 5 15:52:48.013808 systemd[1]: Started update-engine.service - Update Engine. Nov 5 15:52:48.019940 bash[1674]: Updated "/home/core/.ssh/authorized_keys" Nov 5 15:52:48.021888 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 5 15:52:48.027636 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 5 15:52:48.052737 systemd-logind[1620]: Watching system buttons on /dev/input/event2 (Power Button) Nov 5 15:52:48.052777 systemd-logind[1620]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 5 15:52:48.053172 systemd-logind[1620]: New seat seat0. Nov 5 15:52:48.215630 systemd[1]: Started systemd-logind.service - User Login Management. Nov 5 15:52:48.219804 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 5 15:52:48.490337 locksmithd[1678]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 5 15:52:49.123024 sshd_keygen[1627]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 5 15:52:49.187975 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 5 15:52:49.199263 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 5 15:52:49.760910 systemd[1]: issuegen.service: Deactivated successfully. Nov 5 15:52:49.761302 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 5 15:52:49.785275 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 5 15:52:49.836654 containerd[1641]: time="2025-11-05T15:52:49Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 5 15:52:49.837945 containerd[1641]: time="2025-11-05T15:52:49.837804526Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Nov 5 15:52:49.908259 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
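Note: containerd is starting from a version-2 configuration that it migrates in memory on every boot, ignoring the obsolete "subreaper" key along the way. The warning names its own remedy: persisting a current-schema config removes the per-boot migration. On this image the shipped file is /usr/share/containerd/config.toml, so where the rewritten copy should live (for example /etc/containerd/config.toml) is an assumption about the local setup:

    # print the existing configuration rewritten in the current schema version
    containerd config migrate
    # or generate a clean current-version default for comparison
    containerd config default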
Nov 5 15:52:49.913077 containerd[1641]: time="2025-11-05T15:52:49.910564828Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.643µs" Nov 5 15:52:49.913077 containerd[1641]: time="2025-11-05T15:52:49.911990441Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 5 15:52:49.913077 containerd[1641]: time="2025-11-05T15:52:49.912028493Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 5 15:52:49.913077 containerd[1641]: time="2025-11-05T15:52:49.912348733Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 5 15:52:49.913077 containerd[1641]: time="2025-11-05T15:52:49.912371286Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 5 15:52:49.913077 containerd[1641]: time="2025-11-05T15:52:49.912412533Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 5 15:52:49.913077 containerd[1641]: time="2025-11-05T15:52:49.912505307Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 5 15:52:49.913077 containerd[1641]: time="2025-11-05T15:52:49.912523090Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 5 15:52:49.913077 containerd[1641]: time="2025-11-05T15:52:49.912895649Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 5 15:52:49.918012 containerd[1641]: time="2025-11-05T15:52:49.912916258Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 5 15:52:49.918012 containerd[1641]: time="2025-11-05T15:52:49.916010661Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 5 15:52:49.918012 containerd[1641]: time="2025-11-05T15:52:49.916108073Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 5 15:52:49.918012 containerd[1641]: time="2025-11-05T15:52:49.916328737Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 5 15:52:49.925195 containerd[1641]: time="2025-11-05T15:52:49.925126957Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 5 15:52:49.925435 containerd[1641]: time="2025-11-05T15:52:49.925408766Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 5 15:52:49.925512 containerd[1641]: time="2025-11-05T15:52:49.925492993Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 5 15:52:49.925633 containerd[1641]: time="2025-11-05T15:52:49.925612427Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 5 15:52:49.928953 containerd[1641]: 
time="2025-11-05T15:52:49.928649122Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 5 15:52:49.933045 containerd[1641]: time="2025-11-05T15:52:49.929727324Z" level=info msg="metadata content store policy set" policy=shared Nov 5 15:52:49.960180 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 5 15:52:49.973977 containerd[1641]: time="2025-11-05T15:52:49.973608397Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 5 15:52:49.973977 containerd[1641]: time="2025-11-05T15:52:49.973721339Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 5 15:52:49.973977 containerd[1641]: time="2025-11-05T15:52:49.973743881Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 5 15:52:49.973977 containerd[1641]: time="2025-11-05T15:52:49.973760963Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 5 15:52:49.973977 containerd[1641]: time="2025-11-05T15:52:49.973780931Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 5 15:52:49.973977 containerd[1641]: time="2025-11-05T15:52:49.973795167Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 5 15:52:49.973977 containerd[1641]: time="2025-11-05T15:52:49.973811648Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 5 15:52:49.973977 containerd[1641]: time="2025-11-05T15:52:49.973827328Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 5 15:52:49.973977 containerd[1641]: time="2025-11-05T15:52:49.973842596Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 5 15:52:49.973977 containerd[1641]: time="2025-11-05T15:52:49.973855811Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 5 15:52:49.973977 containerd[1641]: time="2025-11-05T15:52:49.973869417Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 5 15:52:49.973977 containerd[1641]: time="2025-11-05T15:52:49.973886639Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 5 15:52:49.975017 containerd[1641]: time="2025-11-05T15:52:49.974892185Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 5 15:52:49.975017 containerd[1641]: time="2025-11-05T15:52:49.974984578Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 5 15:52:49.975140 containerd[1641]: time="2025-11-05T15:52:49.975118960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 5 15:52:49.975291 containerd[1641]: time="2025-11-05T15:52:49.975203328Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 5 15:52:49.975291 containerd[1641]: time="2025-11-05T15:52:49.975221843Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 5 15:52:49.975291 containerd[1641]: time="2025-11-05T15:52:49.975237693Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 5 
15:52:49.975445 containerd[1641]: time="2025-11-05T15:52:49.975420996Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 5 15:52:49.975609 containerd[1641]: time="2025-11-05T15:52:49.975521004Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 5 15:52:49.975742 containerd[1641]: time="2025-11-05T15:52:49.975555599Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 5 15:52:49.975742 containerd[1641]: time="2025-11-05T15:52:49.975685442Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 5 15:52:49.975742 containerd[1641]: time="2025-11-05T15:52:49.975708375Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 5 15:52:49.976107 containerd[1641]: time="2025-11-05T15:52:49.976001675Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 5 15:52:49.976107 containerd[1641]: time="2025-11-05T15:52:49.976053002Z" level=info msg="Start snapshots syncer" Nov 5 15:52:49.976246 containerd[1641]: time="2025-11-05T15:52:49.976221808Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 5 15:52:49.976895 containerd[1641]: time="2025-11-05T15:52:49.976836260Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 5 15:52:49.977238 containerd[1641]: time="2025-11-05T15:52:49.977106167Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 5 15:52:49.977490 containerd[1641]: time="2025-11-05T15:52:49.977416989Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim 
type=io.containerd.sandbox.controller.v1 Nov 5 15:52:49.977713 containerd[1641]: time="2025-11-05T15:52:49.977674993Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 5 15:52:49.977833 containerd[1641]: time="2025-11-05T15:52:49.977813804Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 5 15:52:49.978016 containerd[1641]: time="2025-11-05T15:52:49.977915695Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 5 15:52:49.978016 containerd[1641]: time="2025-11-05T15:52:49.977975828Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 5 15:52:49.978016 containerd[1641]: time="2025-11-05T15:52:49.977993210Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 5 15:52:49.978207 containerd[1641]: time="2025-11-05T15:52:49.978143813Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 5 15:52:49.978207 containerd[1641]: time="2025-11-05T15:52:49.978166295Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 5 15:52:49.978353 containerd[1641]: time="2025-11-05T15:52:49.978283895Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 5 15:52:49.978353 containerd[1641]: time="2025-11-05T15:52:49.978306578Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 5 15:52:49.978353 containerd[1641]: time="2025-11-05T15:52:49.978321836Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 5 15:52:49.978536 containerd[1641]: time="2025-11-05T15:52:49.978499369Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 5 15:52:49.978723 containerd[1641]: time="2025-11-05T15:52:49.978657927Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 5 15:52:49.978723 containerd[1641]: time="2025-11-05T15:52:49.978678225Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 5 15:52:49.978723 containerd[1641]: time="2025-11-05T15:52:49.978693213Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 5 15:52:49.978901 containerd[1641]: time="2025-11-05T15:52:49.978704714Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 5 15:52:49.978901 containerd[1641]: time="2025-11-05T15:52:49.978841090Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 5 15:52:49.978901 containerd[1641]: time="2025-11-05T15:52:49.978857671Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 5 15:52:49.979138 containerd[1641]: time="2025-11-05T15:52:49.979064920Z" level=info msg="runtime interface created" Nov 5 15:52:49.979138 containerd[1641]: time="2025-11-05T15:52:49.979080750Z" level=info msg="created NRI interface" Nov 5 15:52:49.979138 containerd[1641]: time="2025-11-05T15:52:49.979093574Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri 
type=io.containerd.grpc.v1 Nov 5 15:52:49.979319 containerd[1641]: time="2025-11-05T15:52:49.979117959Z" level=info msg="Connect containerd service" Nov 5 15:52:49.979319 containerd[1641]: time="2025-11-05T15:52:49.979290293Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 5 15:52:49.980932 containerd[1641]: time="2025-11-05T15:52:49.980844467Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 5 15:52:49.999385 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 5 15:52:50.004269 systemd[1]: Reached target getty.target - Login Prompts. Nov 5 15:52:50.762107 containerd[1641]: time="2025-11-05T15:52:50.761982300Z" level=info msg="Start subscribing containerd event" Nov 5 15:52:50.762107 containerd[1641]: time="2025-11-05T15:52:50.762074312Z" level=info msg="Start recovering state" Nov 5 15:52:50.762298 containerd[1641]: time="2025-11-05T15:52:50.762239322Z" level=info msg="Start event monitor" Nov 5 15:52:50.762298 containerd[1641]: time="2025-11-05T15:52:50.762260772Z" level=info msg="Start cni network conf syncer for default" Nov 5 15:52:50.762298 containerd[1641]: time="2025-11-05T15:52:50.762272464Z" level=info msg="Start streaming server" Nov 5 15:52:50.762298 containerd[1641]: time="2025-11-05T15:52:50.762286239Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 5 15:52:50.762298 containerd[1641]: time="2025-11-05T15:52:50.762295447Z" level=info msg="runtime interface starting up..." Nov 5 15:52:50.762454 containerd[1641]: time="2025-11-05T15:52:50.762304193Z" level=info msg="starting plugins..." Nov 5 15:52:50.762454 containerd[1641]: time="2025-11-05T15:52:50.762326285Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 5 15:52:50.762634 containerd[1641]: time="2025-11-05T15:52:50.762578067Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 5 15:52:50.762999 containerd[1641]: time="2025-11-05T15:52:50.762981153Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 5 15:52:50.765333 containerd[1641]: time="2025-11-05T15:52:50.765252462Z" level=info msg="containerd successfully booted in 0.930983s" Nov 5 15:52:50.767159 systemd[1]: Started containerd.service - containerd container runtime. Nov 5 15:52:50.848973 tar[1638]: linux-amd64/README.md Nov 5 15:52:50.906487 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 5 15:52:53.182703 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:52:53.187885 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 5 15:52:53.196128 systemd[1]: Startup finished in 7.040s (kernel) + 20.837s (initrd) + 17.054s (userspace) = 44.933s. Nov 5 15:52:53.208237 (kubelet)[1732]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 15:52:55.011239 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 5 15:52:55.014164 systemd[1]: Started sshd@0-10.0.0.94:22-10.0.0.1:42662.service - OpenSSH per-connection server daemon (10.0.0.1:42662). 
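Note: the long plugin-loading walk above is mostly informational. containerd probes every built-in snapshotter and skips the ones whose backing store is absent (no btrfs filesystem under /var/lib/containerd, no devmapper pool, no zfs dataset), leaving overlayfs as the usable default, and the "starting cri plugin" dump shows runc with SystemdCgroup=true and CNI expected under /opt/cni/bin with config in /etc/cni/net.d. The single level=error line is also expected on a fresh node: /etc/cni/net.d stays empty until a pod-network add-on installs its config, and the conf syncer loads it as soon as the file appears; until then only host-network pods can start. A few spot checks, as a sketch:

    # plugin load status; skipped snapshotters show the reason in the error column
    ctr plugins ls
    # effective CRI settings: cgroup driver and CNI directories
    containerd config dump | grep -n -e SystemdCgroup -e conf_dir -e bin_dir
    # empty until a CNI plugin (flannel, calico, ...) writes its config
    ls -l /etc/cni/net.d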
Nov 5 15:52:55.837801 sshd[1743]: Accepted publickey for core from 10.0.0.1 port 42662 ssh2: RSA SHA256:jxfBzj8t4gNsP6XgB3HCYMs94mi46GFjdNA2wywm1q8 Nov 5 15:52:55.848971 sshd-session[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:52:55.902018 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 5 15:52:55.913349 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 5 15:52:56.010050 systemd-logind[1620]: New session 1 of user core. Nov 5 15:52:56.133535 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 5 15:52:56.148946 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 5 15:52:56.592420 (systemd)[1749]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 5 15:52:56.606818 systemd-logind[1620]: New session c1 of user core. Nov 5 15:52:56.827209 kubelet[1732]: E1105 15:52:56.827076 1732 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 15:52:56.844091 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 15:52:56.844367 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 15:52:56.845256 systemd[1]: kubelet.service: Consumed 5.538s CPU time, 259.1M memory peak. Nov 5 15:52:57.160110 systemd[1749]: Queued start job for default target default.target. Nov 5 15:52:57.181846 systemd[1749]: Created slice app.slice - User Application Slice. Nov 5 15:52:57.181895 systemd[1749]: Reached target paths.target - Paths. Nov 5 15:52:57.181987 systemd[1749]: Reached target timers.target - Timers. Nov 5 15:52:57.190955 systemd[1749]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 5 15:52:57.224697 systemd[1749]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 5 15:52:57.225254 systemd[1749]: Reached target sockets.target - Sockets. Nov 5 15:52:57.225534 systemd[1749]: Reached target basic.target - Basic System. Nov 5 15:52:57.226762 systemd[1749]: Reached target default.target - Main User Target. Nov 5 15:52:57.226837 systemd[1749]: Startup finished in 601ms. Nov 5 15:52:57.227075 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 5 15:52:57.250489 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 5 15:52:57.346728 systemd[1]: Started sshd@1-10.0.0.94:22-10.0.0.1:42670.service - OpenSSH per-connection server daemon (10.0.0.1:42670). Nov 5 15:52:57.474904 sshd[1762]: Accepted publickey for core from 10.0.0.1 port 42670 ssh2: RSA SHA256:jxfBzj8t4gNsP6XgB3HCYMs94mi46GFjdNA2wywm1q8 Nov 5 15:52:57.473893 sshd-session[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:52:57.499883 systemd-logind[1620]: New session 2 of user core. Nov 5 15:52:57.506320 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 5 15:52:57.584878 sshd[1765]: Connection closed by 10.0.0.1 port 42670 Nov 5 15:52:57.586939 sshd-session[1762]: pam_unix(sshd:session): session closed for user core Nov 5 15:52:57.603193 systemd[1]: sshd@1-10.0.0.94:22-10.0.0.1:42670.service: Deactivated successfully. Nov 5 15:52:57.606025 systemd[1]: session-2.scope: Deactivated successfully. 
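Note: the kubelet failure is expected at this stage. The packaged unit starts kubelet before `kubeadm init`/`kubeadm join` has written /var/lib/kubelet/config.yaml, so it exits with status 1 and systemd keeps scheduling restarts (counters 1 through 3 appear further down) until the file exists. Checking the loop by hand, a sketch:

    # shows the exit-code failure and the pending scheduled restart
    systemctl status kubelet.service
    # absent until kubeadm init/join writes it
    ls -l /var/lib/kubelet/config.yaml
    # the unit plus its drop-ins, which set KUBELET_KUBEADM_ARGS once kubeadm runs
    systemctl cat kubelet.service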
Nov 5 15:52:57.608406 systemd-logind[1620]: Session 2 logged out. Waiting for processes to exit. Nov 5 15:52:57.613403 systemd[1]: Started sshd@2-10.0.0.94:22-10.0.0.1:42678.service - OpenSSH per-connection server daemon (10.0.0.1:42678). Nov 5 15:52:57.616844 systemd-logind[1620]: Removed session 2. Nov 5 15:52:57.810886 sshd[1771]: Accepted publickey for core from 10.0.0.1 port 42678 ssh2: RSA SHA256:jxfBzj8t4gNsP6XgB3HCYMs94mi46GFjdNA2wywm1q8 Nov 5 15:52:57.810819 sshd-session[1771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:52:57.831615 systemd-logind[1620]: New session 3 of user core. Nov 5 15:52:57.840766 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 5 15:52:57.908169 sshd[1774]: Connection closed by 10.0.0.1 port 42678 Nov 5 15:52:57.909005 sshd-session[1771]: pam_unix(sshd:session): session closed for user core Nov 5 15:52:57.927745 systemd[1]: sshd@2-10.0.0.94:22-10.0.0.1:42678.service: Deactivated successfully. Nov 5 15:52:57.930626 systemd[1]: session-3.scope: Deactivated successfully. Nov 5 15:52:57.932997 systemd-logind[1620]: Session 3 logged out. Waiting for processes to exit. Nov 5 15:52:57.937903 systemd[1]: Started sshd@3-10.0.0.94:22-10.0.0.1:42682.service - OpenSSH per-connection server daemon (10.0.0.1:42682). Nov 5 15:52:57.939652 systemd-logind[1620]: Removed session 3. Nov 5 15:52:58.067081 sshd[1780]: Accepted publickey for core from 10.0.0.1 port 42682 ssh2: RSA SHA256:jxfBzj8t4gNsP6XgB3HCYMs94mi46GFjdNA2wywm1q8 Nov 5 15:52:58.070131 sshd-session[1780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:52:58.084506 systemd-logind[1620]: New session 4 of user core. Nov 5 15:52:58.100461 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 5 15:52:58.190801 sshd[1783]: Connection closed by 10.0.0.1 port 42682 Nov 5 15:52:58.189125 sshd-session[1780]: pam_unix(sshd:session): session closed for user core Nov 5 15:52:58.206585 systemd[1]: sshd@3-10.0.0.94:22-10.0.0.1:42682.service: Deactivated successfully. Nov 5 15:52:58.212047 systemd[1]: session-4.scope: Deactivated successfully. Nov 5 15:52:58.219361 systemd-logind[1620]: Session 4 logged out. Waiting for processes to exit. Nov 5 15:52:58.224487 systemd[1]: Started sshd@4-10.0.0.94:22-10.0.0.1:42686.service - OpenSSH per-connection server daemon (10.0.0.1:42686). Nov 5 15:52:58.234110 systemd-logind[1620]: Removed session 4. Nov 5 15:52:58.346523 sshd[1789]: Accepted publickey for core from 10.0.0.1 port 42686 ssh2: RSA SHA256:jxfBzj8t4gNsP6XgB3HCYMs94mi46GFjdNA2wywm1q8 Nov 5 15:52:58.346240 sshd-session[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:52:58.373029 systemd-logind[1620]: New session 5 of user core. Nov 5 15:52:58.395567 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 5 15:52:58.502568 sudo[1793]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 5 15:52:58.505250 sudo[1793]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 15:52:58.541499 sudo[1793]: pam_unix(sudo:session): session closed for user root Nov 5 15:52:58.547348 sshd[1792]: Connection closed by 10.0.0.1 port 42686 Nov 5 15:52:58.548536 sshd-session[1789]: pam_unix(sshd:session): session closed for user core Nov 5 15:52:58.563155 systemd[1]: sshd@4-10.0.0.94:22-10.0.0.1:42686.service: Deactivated successfully. Nov 5 15:52:58.566434 systemd[1]: session-5.scope: Deactivated successfully. 
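Note: each SSH connection above becomes its own session scope (session-2 through session-5) inside the user-500.slice created at first login; the quick connect/disconnect rhythm, one or two sudo commands per session, is consistent with an installer configuring the host over SSH. Current sessions can be listed with:

    # logind sessions with UID, user, seat, and TTY
    loginctl list-sessions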
Nov 5 15:52:58.570165 systemd-logind[1620]: Session 5 logged out. Waiting for processes to exit. Nov 5 15:52:58.576972 systemd[1]: Started sshd@5-10.0.0.94:22-10.0.0.1:42702.service - OpenSSH per-connection server daemon (10.0.0.1:42702). Nov 5 15:52:58.577679 systemd-logind[1620]: Removed session 5. Nov 5 15:52:58.677770 sshd[1799]: Accepted publickey for core from 10.0.0.1 port 42702 ssh2: RSA SHA256:jxfBzj8t4gNsP6XgB3HCYMs94mi46GFjdNA2wywm1q8 Nov 5 15:52:58.681437 sshd-session[1799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:52:58.701289 systemd-logind[1620]: New session 6 of user core. Nov 5 15:52:58.728713 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 5 15:52:58.797103 sudo[1804]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 5 15:52:58.797534 sudo[1804]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 15:52:58.872643 sudo[1804]: pam_unix(sudo:session): session closed for user root Nov 5 15:52:58.885539 sudo[1803]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 5 15:52:58.886043 sudo[1803]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 15:52:58.914163 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 5 15:52:59.017232 augenrules[1826]: No rules Nov 5 15:52:59.022819 systemd[1]: audit-rules.service: Deactivated successfully. Nov 5 15:52:59.023247 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 5 15:52:59.024881 sudo[1803]: pam_unix(sudo:session): session closed for user root Nov 5 15:52:59.027404 sshd[1802]: Connection closed by 10.0.0.1 port 42702 Nov 5 15:52:59.028264 sshd-session[1799]: pam_unix(sshd:session): session closed for user core Nov 5 15:52:59.039115 systemd[1]: sshd@5-10.0.0.94:22-10.0.0.1:42702.service: Deactivated successfully. Nov 5 15:52:59.042468 systemd[1]: session-6.scope: Deactivated successfully. Nov 5 15:52:59.043578 systemd-logind[1620]: Session 6 logged out. Waiting for processes to exit. Nov 5 15:52:59.047831 systemd[1]: Started sshd@6-10.0.0.94:22-10.0.0.1:42718.service - OpenSSH per-connection server daemon (10.0.0.1:42718). Nov 5 15:52:59.049332 systemd-logind[1620]: Removed session 6. Nov 5 15:52:59.142730 sshd[1835]: Accepted publickey for core from 10.0.0.1 port 42718 ssh2: RSA SHA256:jxfBzj8t4gNsP6XgB3HCYMs94mi46GFjdNA2wywm1q8 Nov 5 15:52:59.143573 sshd-session[1835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:52:59.157543 systemd-logind[1620]: New session 7 of user core. Nov 5 15:52:59.169211 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 5 15:52:59.252977 sudo[1839]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 5 15:52:59.254814 sudo[1839]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 15:53:01.753980 systemd[1]: Starting docker.service - Docker Application Container Engine... 
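Note: augenrules reports "No rules" because the two default rule files were deleted just beforehand; it assembles the loaded ruleset by concatenating every *.rules file under /etc/audit/rules.d, so an empty directory yields an empty ruleset. The same reload, plus a verification, done by hand:

    # rebuild and load the kernel audit ruleset from /etc/audit/rules.d
    augenrules --load
    # list what the kernel currently has loaded ("No rules" here)
    auditctl -l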
Nov 5 15:53:01.891136 (dockerd)[1859]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 5 15:53:03.093397 dockerd[1859]: time="2025-11-05T15:53:03.093300763Z" level=info msg="Starting up" Nov 5 15:53:03.096000 dockerd[1859]: time="2025-11-05T15:53:03.095731251Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 5 15:53:03.233747 dockerd[1859]: time="2025-11-05T15:53:03.233647614Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 5 15:53:03.458749 dockerd[1859]: time="2025-11-05T15:53:03.458340593Z" level=info msg="Loading containers: start." Nov 5 15:53:03.491523 kernel: Initializing XFRM netlink socket Nov 5 15:53:04.175008 systemd-networkd[1529]: docker0: Link UP Nov 5 15:53:04.182412 dockerd[1859]: time="2025-11-05T15:53:04.182338137Z" level=info msg="Loading containers: done." Nov 5 15:53:04.373616 dockerd[1859]: time="2025-11-05T15:53:04.373522887Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 5 15:53:04.373871 dockerd[1859]: time="2025-11-05T15:53:04.373669522Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 5 15:53:04.373871 dockerd[1859]: time="2025-11-05T15:53:04.373803253Z" level=info msg="Initializing buildkit" Nov 5 15:53:04.427090 dockerd[1859]: time="2025-11-05T15:53:04.426693110Z" level=info msg="Completed buildkit initialization" Nov 5 15:53:04.437147 dockerd[1859]: time="2025-11-05T15:53:04.437025166Z" level=info msg="Daemon has completed initialization" Nov 5 15:53:04.437461 dockerd[1859]: time="2025-11-05T15:53:04.437339175Z" level=info msg="API listen on /run/docker.sock" Nov 5 15:53:04.437823 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 5 15:53:05.226296 containerd[1641]: time="2025-11-05T15:53:05.226240627Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Nov 5 15:53:06.593886 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1022024248.mount: Deactivated successfully. Nov 5 15:53:07.089884 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 5 15:53:07.092615 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:53:07.372761 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:53:07.386468 (kubelet)[2139]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 15:53:07.590433 kubelet[2139]: E1105 15:53:07.590329 2139 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 15:53:07.598691 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 15:53:07.598953 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 15:53:07.599430 systemd[1]: kubelet.service: Consumed 325ms CPU time, 110.9M memory peak. 
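Note: the overlay2 warning concerns build performance only. With CONFIG_OVERLAY_FS_REDIRECT_DIR enabled in the kernel, dockerd falls back to the naive diff path when committing image layers; running containers are unaffected. The chosen driver and diff mode can be confirmed with:

    # storage driver selected by the daemon (overlay2 here)
    docker info --format '{{.Driver}}'
    # full storage section, including "Native Overlay Diff: false"
    docker info | grep -A4 'Storage Driver'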
Nov 5 15:53:08.274131 containerd[1641]: time="2025-11-05T15:53:08.274036953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:53:08.274952 containerd[1641]: time="2025-11-05T15:53:08.274888510Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=27065392" Nov 5 15:53:08.276229 containerd[1641]: time="2025-11-05T15:53:08.276189159Z" level=info msg="ImageCreate event name:\"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:53:08.280377 containerd[1641]: time="2025-11-05T15:53:08.280333561Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:53:08.281855 containerd[1641]: time="2025-11-05T15:53:08.281782969Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"27061991\" in 3.055486638s" Nov 5 15:53:08.281855 containerd[1641]: time="2025-11-05T15:53:08.281862769Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97\"" Nov 5 15:53:08.282660 containerd[1641]: time="2025-11-05T15:53:08.282623045Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Nov 5 15:53:09.875776 containerd[1641]: time="2025-11-05T15:53:09.875719800Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:53:09.876559 containerd[1641]: time="2025-11-05T15:53:09.876521664Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=21159757" Nov 5 15:53:09.877885 containerd[1641]: time="2025-11-05T15:53:09.877837151Z" level=info msg="ImageCreate event name:\"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:53:09.880702 containerd[1641]: time="2025-11-05T15:53:09.880652601Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:53:09.881908 containerd[1641]: time="2025-11-05T15:53:09.881874112Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"22820214\" in 1.599216061s" Nov 5 15:53:09.881972 containerd[1641]: time="2025-11-05T15:53:09.881908055Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f\"" Nov 5 15:53:09.882548 containerd[1641]: 
time="2025-11-05T15:53:09.882326290Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Nov 5 15:53:12.049402 containerd[1641]: time="2025-11-05T15:53:12.049336898Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:53:12.050149 containerd[1641]: time="2025-11-05T15:53:12.050120918Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=15725093" Nov 5 15:53:12.051591 containerd[1641]: time="2025-11-05T15:53:12.051547443Z" level=info msg="ImageCreate event name:\"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:53:12.054637 containerd[1641]: time="2025-11-05T15:53:12.054569641Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:53:12.055703 containerd[1641]: time="2025-11-05T15:53:12.055641431Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"17385568\" in 2.173288s" Nov 5 15:53:12.055703 containerd[1641]: time="2025-11-05T15:53:12.055696745Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813\"" Nov 5 15:53:12.056310 containerd[1641]: time="2025-11-05T15:53:12.056279307Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Nov 5 15:53:13.369582 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1659255258.mount: Deactivated successfully. 
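Note: these PullImage requests arrive over the CRI socket while kubelet itself is still crash-looping, which matches a kubeadm-style image pre-pull driven by the installer (an inference from the log, not something it states). The same images can be pulled or inspected by hand in the "k8s.io" namespace the log registered earlier:

    # pull over CRI, the same path kubeadm and kubelet use
    crictl pull registry.k8s.io/kube-apiserver:v1.34.1
    # or talk to containerd directly in its Kubernetes namespace
    ctr -n k8s.io images pull registry.k8s.io/kube-apiserver:v1.34.1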
Nov 5 15:53:13.843384 containerd[1641]: time="2025-11-05T15:53:13.843301131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:53:13.843947 containerd[1641]: time="2025-11-05T15:53:13.843839701Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=25964699" Nov 5 15:53:13.844993 containerd[1641]: time="2025-11-05T15:53:13.844953590Z" level=info msg="ImageCreate event name:\"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:53:13.847256 containerd[1641]: time="2025-11-05T15:53:13.847209190Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:53:13.847640 containerd[1641]: time="2025-11-05T15:53:13.847601917Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"25963718\" in 1.791288556s" Nov 5 15:53:13.847640 containerd[1641]: time="2025-11-05T15:53:13.847632694Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7\"" Nov 5 15:53:13.848276 containerd[1641]: time="2025-11-05T15:53:13.848057591Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Nov 5 15:53:14.350547 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3191693590.mount: Deactivated successfully. 
Nov 5 15:53:16.142667 containerd[1641]: time="2025-11-05T15:53:16.142579930Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:53:16.143835 containerd[1641]: time="2025-11-05T15:53:16.143789658Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Nov 5 15:53:16.145275 containerd[1641]: time="2025-11-05T15:53:16.145194483Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:53:16.148726 containerd[1641]: time="2025-11-05T15:53:16.148670642Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:53:16.150426 containerd[1641]: time="2025-11-05T15:53:16.150368296Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.30228156s" Nov 5 15:53:16.150426 containerd[1641]: time="2025-11-05T15:53:16.150410324Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Nov 5 15:53:16.151111 containerd[1641]: time="2025-11-05T15:53:16.151069531Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Nov 5 15:53:16.825563 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3487459919.mount: Deactivated successfully. 
Nov 5 15:53:16.833510 containerd[1641]: time="2025-11-05T15:53:16.833399243Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:53:16.834399 containerd[1641]: time="2025-11-05T15:53:16.834303959Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Nov 5 15:53:16.835779 containerd[1641]: time="2025-11-05T15:53:16.835727890Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:53:16.838390 containerd[1641]: time="2025-11-05T15:53:16.838336141Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:53:16.839222 containerd[1641]: time="2025-11-05T15:53:16.839145298Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 688.024081ms" Nov 5 15:53:16.839222 containerd[1641]: time="2025-11-05T15:53:16.839212434Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Nov 5 15:53:16.839829 containerd[1641]: time="2025-11-05T15:53:16.839692775Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Nov 5 15:53:17.839854 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 5 15:53:17.842192 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:53:18.151554 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:53:18.173604 (kubelet)[2272]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 15:53:18.232321 kubelet[2272]: E1105 15:53:18.232242 2272 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 15:53:18.237422 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 15:53:18.237775 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 15:53:18.238475 systemd[1]: kubelet.service: Consumed 308ms CPU time, 108.9M memory peak. 
Nov 5 15:53:25.207340 containerd[1641]: time="2025-11-05T15:53:25.207211560Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:53:25.210147 containerd[1641]: time="2025-11-05T15:53:25.210093964Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=73514593" Nov 5 15:53:25.213406 containerd[1641]: time="2025-11-05T15:53:25.213343043Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:53:25.218500 containerd[1641]: time="2025-11-05T15:53:25.218373189Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:53:25.220042 containerd[1641]: time="2025-11-05T15:53:25.219948973Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 8.380186396s" Nov 5 15:53:25.220042 containerd[1641]: time="2025-11-05T15:53:25.220034352Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Nov 5 15:53:28.340433 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 5 15:53:28.343431 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:53:28.611004 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:53:28.622289 (kubelet)[2316]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 15:53:28.800168 kubelet[2316]: E1105 15:53:28.799671 2316 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 15:53:28.804782 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 15:53:28.805043 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 15:53:28.805512 systemd[1]: kubelet.service: Consumed 390ms CPU time, 110.7M memory peak. Nov 5 15:53:29.883606 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:53:29.883834 systemd[1]: kubelet.service: Consumed 390ms CPU time, 110.7M memory peak. Nov 5 15:53:29.886461 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:53:29.918053 systemd[1]: Reload requested from client PID 2332 ('systemctl') (unit session-7.scope)... Nov 5 15:53:29.918072 systemd[1]: Reloading... Nov 5 15:53:30.039984 zram_generator::config[2396]: No configuration found. Nov 5 15:53:31.852323 systemd[1]: Reloading finished in 1933 ms. Nov 5 15:53:31.932055 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 5 15:53:31.932201 systemd[1]: kubelet.service: Failed with result 'signal'. 
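Note: with etcd fetched (8.4 s for a ~74 MB image), the pre-pull set is complete: apiserver, controller-manager, scheduler, proxy, coredns, pause, and etcd are all in the local store, and the systemd reload that follows fits the installer dropping in the kubelet configuration before the successful start below. The cached set is visible with:

    # images now present in the CRI image store
    crictl images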
Nov 5 15:53:31.932596 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:53:31.932659 systemd[1]: kubelet.service: Consumed 186ms CPU time, 98.1M memory peak. Nov 5 15:53:31.934572 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:53:32.149860 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:53:32.166348 (kubelet)[2423]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 5 15:53:32.218858 kubelet[2423]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 5 15:53:32.218858 kubelet[2423]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 15:53:32.219324 kubelet[2423]: I1105 15:53:32.218911 2423 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 5 15:53:32.420010 kubelet[2423]: I1105 15:53:32.419826 2423 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 5 15:53:32.420010 kubelet[2423]: I1105 15:53:32.419863 2423 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 5 15:53:32.420010 kubelet[2423]: I1105 15:53:32.419945 2423 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 5 15:53:32.420010 kubelet[2423]: I1105 15:53:32.419960 2423 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 5 15:53:32.420245 kubelet[2423]: I1105 15:53:32.420230 2423 server.go:956] "Client rotation is on, will bootstrap in background" Nov 5 15:53:32.808210 kubelet[2423]: E1105 15:53:32.808060 2423 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.94:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 5 15:53:32.809009 kubelet[2423]: I1105 15:53:32.808970 2423 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 5 15:53:32.813373 kubelet[2423]: I1105 15:53:32.813338 2423 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 5 15:53:32.819390 kubelet[2423]: I1105 15:53:32.819351 2423 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 5 15:53:32.819644 kubelet[2423]: I1105 15:53:32.819604 2423 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 5 15:53:32.819802 kubelet[2423]: I1105 15:53:32.819635 2423 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 5 15:53:32.820998 kubelet[2423]: I1105 15:53:32.819821 2423 topology_manager.go:138] "Creating topology manager with none policy" Nov 5 15:53:32.820998 kubelet[2423]: I1105 15:53:32.819831 2423 container_manager_linux.go:306] "Creating device plugin manager" Nov 5 15:53:32.820998 kubelet[2423]: I1105 15:53:32.819973 2423 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 5 15:53:32.910962 kubelet[2423]: I1105 15:53:32.910865 2423 state_mem.go:36] "Initialized new in-memory state store" Nov 5 15:53:32.912222 kubelet[2423]: I1105 15:53:32.912178 2423 kubelet.go:475] "Attempting to sync node with API server" Nov 5 15:53:32.912222 kubelet[2423]: I1105 15:53:32.912222 2423 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 5 15:53:32.912375 kubelet[2423]: I1105 15:53:32.912357 2423 kubelet.go:387] "Adding apiserver pod source" Nov 5 15:53:32.912440 kubelet[2423]: I1105 15:53:32.912406 2423 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 5 15:53:32.919815 kubelet[2423]: E1105 15:53:32.919724 2423 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.94:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 5 15:53:32.928263 kubelet[2423]: E1105 15:53:32.928217 2423 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.94:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.94:6443: connect: 
connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 5 15:53:32.930179 kubelet[2423]: I1105 15:53:32.930154 2423 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 5 15:53:32.930730 kubelet[2423]: I1105 15:53:32.930693 2423 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 5 15:53:32.930730 kubelet[2423]: I1105 15:53:32.930726 2423 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 5 15:53:32.930833 kubelet[2423]: W1105 15:53:32.930804 2423 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 5 15:53:32.936728 kubelet[2423]: I1105 15:53:32.936333 2423 server.go:1262] "Started kubelet" Nov 5 15:53:32.936728 kubelet[2423]: I1105 15:53:32.936421 2423 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 5 15:53:32.937194 kubelet[2423]: I1105 15:53:32.936954 2423 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 5 15:53:32.937194 kubelet[2423]: I1105 15:53:32.937042 2423 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 5 15:53:32.938048 kubelet[2423]: I1105 15:53:32.938022 2423 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 5 15:53:32.938124 kubelet[2423]: I1105 15:53:32.938058 2423 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 5 15:53:32.938952 kubelet[2423]: I1105 15:53:32.938429 2423 server.go:310] "Adding debug handlers to kubelet server" Nov 5 15:53:32.938952 kubelet[2423]: I1105 15:53:32.938895 2423 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 5 15:53:32.942183 kubelet[2423]: E1105 15:53:32.942149 2423 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 15:53:32.942274 kubelet[2423]: I1105 15:53:32.942197 2423 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 5 15:53:32.942421 kubelet[2423]: I1105 15:53:32.942395 2423 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 5 15:53:32.942524 kubelet[2423]: I1105 15:53:32.942503 2423 reconciler.go:29] "Reconciler: start to sync state" Nov 5 15:53:32.943041 kubelet[2423]: E1105 15:53:32.943009 2423 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.94:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 5 15:53:32.943621 kubelet[2423]: I1105 15:53:32.943584 2423 factory.go:223] Registration of the systemd container factory successfully Nov 5 15:53:32.943737 kubelet[2423]: I1105 15:53:32.943715 2423 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 5 15:53:32.944974 kubelet[2423]: E1105 15:53:32.944952 2423 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 5 15:53:32.945140 kubelet[2423]: I1105 15:53:32.945121 2423 factory.go:223] Registration of the containerd container factory successfully Nov 5 15:53:32.946687 kubelet[2423]: E1105 15:53:32.944646 2423 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.94:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.94:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1875274680217f86 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-05 15:53:32.936290182 +0000 UTC m=+0.765292738,LastTimestamp:2025-11-05 15:53:32.936290182 +0000 UTC m=+0.765292738,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 5 15:53:32.956974 kubelet[2423]: E1105 15:53:32.956911 2423 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.94:6443: connect: connection refused" interval="200ms" Nov 5 15:53:32.959165 kubelet[2423]: I1105 15:53:32.959127 2423 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 5 15:53:32.959165 kubelet[2423]: I1105 15:53:32.959146 2423 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 5 15:53:32.959165 kubelet[2423]: I1105 15:53:32.959167 2423 state_mem.go:36] "Initialized new in-memory state store" Nov 5 15:53:32.965415 kubelet[2423]: I1105 15:53:32.965386 2423 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 5 15:53:32.967026 kubelet[2423]: I1105 15:53:32.967000 2423 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Nov 5 15:53:32.967026 kubelet[2423]: I1105 15:53:32.967027 2423 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 5 15:53:32.967115 kubelet[2423]: I1105 15:53:32.967060 2423 kubelet.go:2427] "Starting kubelet main sync loop" Nov 5 15:53:32.967115 kubelet[2423]: E1105 15:53:32.967104 2423 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 5 15:53:32.969101 kubelet[2423]: E1105 15:53:32.968760 2423 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.94:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 5 15:53:33.006463 kubelet[2423]: I1105 15:53:33.006408 2423 policy_none.go:49] "None policy: Start" Nov 5 15:53:33.006463 kubelet[2423]: I1105 15:53:33.006482 2423 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 5 15:53:33.006680 kubelet[2423]: I1105 15:53:33.006511 2423 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 5 15:53:33.043184 kubelet[2423]: E1105 15:53:33.043123 2423 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 15:53:33.067768 kubelet[2423]: E1105 15:53:33.067592 2423 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 5 15:53:33.090562 kubelet[2423]: I1105 15:53:33.090505 2423 policy_none.go:47] "Start" Nov 5 15:53:33.095502 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 5 15:53:33.117596 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 5 15:53:33.121474 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 5 15:53:33.132482 kubelet[2423]: E1105 15:53:33.132417 2423 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 5 15:53:33.132788 kubelet[2423]: I1105 15:53:33.132741 2423 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 5 15:53:33.132840 kubelet[2423]: I1105 15:53:33.132766 2423 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 5 15:53:33.133759 kubelet[2423]: I1105 15:53:33.133706 2423 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 5 15:53:33.134176 kubelet[2423]: E1105 15:53:33.134151 2423 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 5 15:53:33.134243 kubelet[2423]: E1105 15:53:33.134206 2423 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 5 15:53:33.158264 kubelet[2423]: E1105 15:53:33.158188 2423 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.94:6443: connect: connection refused" interval="400ms" Nov 5 15:53:33.235203 kubelet[2423]: I1105 15:53:33.235154 2423 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 5 15:53:33.238457 kubelet[2423]: E1105 15:53:33.238400 2423 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.94:6443/api/v1/nodes\": dial tcp 10.0.0.94:6443: connect: connection refused" node="localhost" Nov 5 15:53:33.263301 update_engine[1628]: I20251105 15:53:33.263177 1628 update_attempter.cc:509] Updating boot flags... Nov 5 15:53:33.280235 systemd[1]: Created slice kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice - libcontainer container kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice. Nov 5 15:53:33.292713 kubelet[2423]: E1105 15:53:33.292609 2423 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 15:53:33.303764 systemd[1]: Created slice kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice - libcontainer container kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice. Nov 5 15:53:33.312740 kubelet[2423]: E1105 15:53:33.312493 2423 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 15:53:33.336994 systemd[1]: Created slice kubepods-burstable-pod803dcac67f86f61d21455eddd7f31201.slice - libcontainer container kubepods-burstable-pod803dcac67f86f61d21455eddd7f31201.slice. 
Nov 5 15:53:33.345380 kubelet[2423]: I1105 15:53:33.343767 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/803dcac67f86f61d21455eddd7f31201-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"803dcac67f86f61d21455eddd7f31201\") " pod="kube-system/kube-apiserver-localhost" Nov 5 15:53:33.345380 kubelet[2423]: I1105 15:53:33.345214 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/803dcac67f86f61d21455eddd7f31201-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"803dcac67f86f61d21455eddd7f31201\") " pod="kube-system/kube-apiserver-localhost" Nov 5 15:53:33.345380 kubelet[2423]: I1105 15:53:33.345250 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 15:53:33.345380 kubelet[2423]: I1105 15:53:33.345269 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 15:53:33.345380 kubelet[2423]: I1105 15:53:33.345295 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Nov 5 15:53:33.345574 kubelet[2423]: I1105 15:53:33.345319 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/803dcac67f86f61d21455eddd7f31201-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"803dcac67f86f61d21455eddd7f31201\") " pod="kube-system/kube-apiserver-localhost" Nov 5 15:53:33.345940 kubelet[2423]: I1105 15:53:33.345666 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 15:53:33.345940 kubelet[2423]: I1105 15:53:33.345866 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 15:53:33.345940 kubelet[2423]: I1105 15:53:33.345887 2423 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " 
pod="kube-system/kube-controller-manager-localhost" Nov 5 15:53:33.356968 kubelet[2423]: E1105 15:53:33.356739 2423 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 15:53:33.441951 kubelet[2423]: I1105 15:53:33.441871 2423 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 5 15:53:33.442508 kubelet[2423]: E1105 15:53:33.442470 2423 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.94:6443/api/v1/nodes\": dial tcp 10.0.0.94:6443: connect: connection refused" node="localhost" Nov 5 15:53:33.561254 kubelet[2423]: E1105 15:53:33.560912 2423 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.94:6443: connect: connection refused" interval="800ms" Nov 5 15:53:33.819396 kubelet[2423]: E1105 15:53:33.819224 2423 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:53:33.820339 containerd[1641]: time="2025-11-05T15:53:33.820291650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,}" Nov 5 15:53:33.844414 kubelet[2423]: I1105 15:53:33.844363 2423 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 5 15:53:33.844936 kubelet[2423]: E1105 15:53:33.844882 2423 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.94:6443/api/v1/nodes\": dial tcp 10.0.0.94:6443: connect: connection refused" node="localhost" Nov 5 15:53:33.861030 kubelet[2423]: E1105 15:53:33.860974 2423 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.94:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 5 15:53:34.016184 kubelet[2423]: E1105 15:53:34.016130 2423 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:53:34.016769 containerd[1641]: time="2025-11-05T15:53:34.016731477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,}" Nov 5 15:53:34.118106 kubelet[2423]: E1105 15:53:34.118050 2423 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.94:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 5 15:53:34.252624 kubelet[2423]: E1105 15:53:34.252517 2423 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:53:34.253384 containerd[1641]: time="2025-11-05T15:53:34.253137031Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:803dcac67f86f61d21455eddd7f31201,Namespace:kube-system,Attempt:0,}" Nov 5 15:53:34.287369 kubelet[2423]: E1105 15:53:34.287277 2423 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.94:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 5 15:53:34.342140 kubelet[2423]: E1105 15:53:34.342080 2423 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.94:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 5 15:53:34.362726 kubelet[2423]: E1105 15:53:34.362633 2423 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.94:6443: connect: connection refused" interval="1.6s" Nov 5 15:53:34.647123 kubelet[2423]: I1105 15:53:34.647081 2423 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 5 15:53:34.647507 kubelet[2423]: E1105 15:53:34.647477 2423 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.94:6443/api/v1/nodes\": dial tcp 10.0.0.94:6443: connect: connection refused" node="localhost" Nov 5 15:53:34.833653 kubelet[2423]: E1105 15:53:34.833595 2423 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.94:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 5 15:53:34.881345 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1862457426.mount: Deactivated successfully. 
Nov 5 15:53:34.890863 containerd[1641]: time="2025-11-05T15:53:34.890812007Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 15:53:34.894442 containerd[1641]: time="2025-11-05T15:53:34.894411613Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Nov 5 15:53:34.895972 containerd[1641]: time="2025-11-05T15:53:34.895878256Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 15:53:34.898731 containerd[1641]: time="2025-11-05T15:53:34.898592796Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 15:53:34.900171 containerd[1641]: time="2025-11-05T15:53:34.900133189Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 5 15:53:34.901995 containerd[1641]: time="2025-11-05T15:53:34.901896538Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 15:53:34.905996 containerd[1641]: time="2025-11-05T15:53:34.904237649Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 15:53:34.905996 containerd[1641]: time="2025-11-05T15:53:34.905935025Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 714.63889ms" Nov 5 15:53:34.907751 containerd[1641]: time="2025-11-05T15:53:34.907718352Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 5 15:53:34.911036 containerd[1641]: time="2025-11-05T15:53:34.910981208Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 517.979894ms" Nov 5 15:53:34.914311 containerd[1641]: time="2025-11-05T15:53:34.914252739Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 524.616893ms" Nov 5 15:53:34.954559 containerd[1641]: time="2025-11-05T15:53:34.954485352Z" level=info msg="connecting to shim cae026293ed419df38611fc71cfe9e32658cf15549741bdc97d09f9cb07d9161" address="unix:///run/containerd/s/a0aa013a1f7c0bcf41d8dfcdbf011c50607359b871a36623baacd185ec9d0996" namespace=k8s.io protocol=ttrpc version=3 Nov 5 
15:53:34.971962 containerd[1641]: time="2025-11-05T15:53:34.963907813Z" level=info msg="connecting to shim 62525587beb69da1a2b55ea1dcbc5711699a42cd8a3a8da426e71f8c5906887a" address="unix:///run/containerd/s/c39f4c08a817d4eaaa477846f38542eed8d5c2d7f063a681757651e30e057fc6" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:53:34.997182 containerd[1641]: time="2025-11-05T15:53:34.997099843Z" level=info msg="connecting to shim 0a56516d19a33ec2dad306a90555f6bf3312fac22d02b53081170a18b1bc1e5c" address="unix:///run/containerd/s/3dcbc04791df6993777d8623847fa1052fc21e68c86b7c19dad124144ae8b9b2" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:53:35.022182 systemd[1]: Started cri-containerd-cae026293ed419df38611fc71cfe9e32658cf15549741bdc97d09f9cb07d9161.scope - libcontainer container cae026293ed419df38611fc71cfe9e32658cf15549741bdc97d09f9cb07d9161. Nov 5 15:53:35.027495 systemd[1]: Started cri-containerd-62525587beb69da1a2b55ea1dcbc5711699a42cd8a3a8da426e71f8c5906887a.scope - libcontainer container 62525587beb69da1a2b55ea1dcbc5711699a42cd8a3a8da426e71f8c5906887a. Nov 5 15:53:35.069422 systemd[1]: Started cri-containerd-0a56516d19a33ec2dad306a90555f6bf3312fac22d02b53081170a18b1bc1e5c.scope - libcontainer container 0a56516d19a33ec2dad306a90555f6bf3312fac22d02b53081170a18b1bc1e5c. Nov 5 15:53:35.083309 containerd[1641]: time="2025-11-05T15:53:35.083227817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"cae026293ed419df38611fc71cfe9e32658cf15549741bdc97d09f9cb07d9161\"" Nov 5 15:53:35.087473 kubelet[2423]: E1105 15:53:35.087432 2423 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:53:35.097510 containerd[1641]: time="2025-11-05T15:53:35.097459190Z" level=info msg="CreateContainer within sandbox \"cae026293ed419df38611fc71cfe9e32658cf15549741bdc97d09f9cb07d9161\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 5 15:53:35.108044 containerd[1641]: time="2025-11-05T15:53:35.107808719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:803dcac67f86f61d21455eddd7f31201,Namespace:kube-system,Attempt:0,} returns sandbox id \"62525587beb69da1a2b55ea1dcbc5711699a42cd8a3a8da426e71f8c5906887a\"" Nov 5 15:53:35.108968 kubelet[2423]: E1105 15:53:35.108938 2423 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:53:35.115822 containerd[1641]: time="2025-11-05T15:53:35.115750271Z" level=info msg="CreateContainer within sandbox \"62525587beb69da1a2b55ea1dcbc5711699a42cd8a3a8da426e71f8c5906887a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 5 15:53:35.123120 containerd[1641]: time="2025-11-05T15:53:35.123049231Z" level=info msg="Container 7df22975141ca63500cdd5881a68dc7d687b99dad747f76abd4b76c0ca3a2bb7: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:53:35.129814 containerd[1641]: time="2025-11-05T15:53:35.129732238Z" level=info msg="Container 9070d779b2dc58199e3066e53f57d7a55e3ab38d5d173eda7d8ef7cf3923de8e: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:53:35.137152 containerd[1641]: time="2025-11-05T15:53:35.136961667Z" level=info msg="CreateContainer within sandbox 
\"cae026293ed419df38611fc71cfe9e32658cf15549741bdc97d09f9cb07d9161\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7df22975141ca63500cdd5881a68dc7d687b99dad747f76abd4b76c0ca3a2bb7\"" Nov 5 15:53:35.144553 containerd[1641]: time="2025-11-05T15:53:35.144457485Z" level=info msg="StartContainer for \"7df22975141ca63500cdd5881a68dc7d687b99dad747f76abd4b76c0ca3a2bb7\"" Nov 5 15:53:35.146007 containerd[1641]: time="2025-11-05T15:53:35.145972381Z" level=info msg="CreateContainer within sandbox \"62525587beb69da1a2b55ea1dcbc5711699a42cd8a3a8da426e71f8c5906887a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9070d779b2dc58199e3066e53f57d7a55e3ab38d5d173eda7d8ef7cf3923de8e\"" Nov 5 15:53:35.146956 containerd[1641]: time="2025-11-05T15:53:35.146873447Z" level=info msg="connecting to shim 7df22975141ca63500cdd5881a68dc7d687b99dad747f76abd4b76c0ca3a2bb7" address="unix:///run/containerd/s/a0aa013a1f7c0bcf41d8dfcdbf011c50607359b871a36623baacd185ec9d0996" protocol=ttrpc version=3 Nov 5 15:53:35.151058 containerd[1641]: time="2025-11-05T15:53:35.147356060Z" level=info msg="StartContainer for \"9070d779b2dc58199e3066e53f57d7a55e3ab38d5d173eda7d8ef7cf3923de8e\"" Nov 5 15:53:35.157312 containerd[1641]: time="2025-11-05T15:53:35.157238935Z" level=info msg="connecting to shim 9070d779b2dc58199e3066e53f57d7a55e3ab38d5d173eda7d8ef7cf3923de8e" address="unix:///run/containerd/s/c39f4c08a817d4eaaa477846f38542eed8d5c2d7f063a681757651e30e057fc6" protocol=ttrpc version=3 Nov 5 15:53:35.190291 containerd[1641]: time="2025-11-05T15:53:35.190226080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a56516d19a33ec2dad306a90555f6bf3312fac22d02b53081170a18b1bc1e5c\"" Nov 5 15:53:35.191345 kubelet[2423]: E1105 15:53:35.191296 2423 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:53:35.196889 containerd[1641]: time="2025-11-05T15:53:35.196847802Z" level=info msg="CreateContainer within sandbox \"0a56516d19a33ec2dad306a90555f6bf3312fac22d02b53081170a18b1bc1e5c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 5 15:53:35.198136 systemd[1]: Started cri-containerd-7df22975141ca63500cdd5881a68dc7d687b99dad747f76abd4b76c0ca3a2bb7.scope - libcontainer container 7df22975141ca63500cdd5881a68dc7d687b99dad747f76abd4b76c0ca3a2bb7. Nov 5 15:53:35.203828 systemd[1]: Started cri-containerd-9070d779b2dc58199e3066e53f57d7a55e3ab38d5d173eda7d8ef7cf3923de8e.scope - libcontainer container 9070d779b2dc58199e3066e53f57d7a55e3ab38d5d173eda7d8ef7cf3923de8e. 
Nov 5 15:53:35.213860 containerd[1641]: time="2025-11-05T15:53:35.213802081Z" level=info msg="Container 20a7c0bacb31f8d31c10e88e23c4a2c755bed1593d1865ec484826b6a3241f33: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:53:35.227148 containerd[1641]: time="2025-11-05T15:53:35.227001222Z" level=info msg="CreateContainer within sandbox \"0a56516d19a33ec2dad306a90555f6bf3312fac22d02b53081170a18b1bc1e5c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"20a7c0bacb31f8d31c10e88e23c4a2c755bed1593d1865ec484826b6a3241f33\"" Nov 5 15:53:35.227911 containerd[1641]: time="2025-11-05T15:53:35.227853418Z" level=info msg="StartContainer for \"20a7c0bacb31f8d31c10e88e23c4a2c755bed1593d1865ec484826b6a3241f33\"" Nov 5 15:53:35.229859 containerd[1641]: time="2025-11-05T15:53:35.229779181Z" level=info msg="connecting to shim 20a7c0bacb31f8d31c10e88e23c4a2c755bed1593d1865ec484826b6a3241f33" address="unix:///run/containerd/s/3dcbc04791df6993777d8623847fa1052fc21e68c86b7c19dad124144ae8b9b2" protocol=ttrpc version=3 Nov 5 15:53:35.254112 systemd[1]: Started cri-containerd-20a7c0bacb31f8d31c10e88e23c4a2c755bed1593d1865ec484826b6a3241f33.scope - libcontainer container 20a7c0bacb31f8d31c10e88e23c4a2c755bed1593d1865ec484826b6a3241f33. Nov 5 15:53:35.284749 containerd[1641]: time="2025-11-05T15:53:35.284702609Z" level=info msg="StartContainer for \"7df22975141ca63500cdd5881a68dc7d687b99dad747f76abd4b76c0ca3a2bb7\" returns successfully" Nov 5 15:53:35.290984 containerd[1641]: time="2025-11-05T15:53:35.289886681Z" level=info msg="StartContainer for \"9070d779b2dc58199e3066e53f57d7a55e3ab38d5d173eda7d8ef7cf3923de8e\" returns successfully" Nov 5 15:53:35.387959 containerd[1641]: time="2025-11-05T15:53:35.386662873Z" level=info msg="StartContainer for \"20a7c0bacb31f8d31c10e88e23c4a2c755bed1593d1865ec484826b6a3241f33\" returns successfully" Nov 5 15:53:36.005156 kubelet[2423]: E1105 15:53:36.005119 2423 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 15:53:36.005882 kubelet[2423]: E1105 15:53:36.005796 2423 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:53:36.009711 kubelet[2423]: E1105 15:53:36.009650 2423 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 15:53:36.010113 kubelet[2423]: E1105 15:53:36.010059 2423 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:53:36.011460 kubelet[2423]: E1105 15:53:36.011300 2423 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 15:53:36.011595 kubelet[2423]: E1105 15:53:36.011578 2423 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:53:36.265812 kubelet[2423]: I1105 15:53:36.264857 2423 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 5 15:53:37.013754 kubelet[2423]: E1105 15:53:37.013330 2423 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not 
found" node="localhost" Nov 5 15:53:37.013754 kubelet[2423]: E1105 15:53:37.013527 2423 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 15:53:37.013754 kubelet[2423]: E1105 15:53:37.013629 2423 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:53:37.014493 kubelet[2423]: E1105 15:53:37.013777 2423 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:53:37.977790 kubelet[2423]: E1105 15:53:37.977716 2423 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 5 15:53:38.240107 kubelet[2423]: E1105 15:53:38.239807 2423 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1875274680217f86 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-05 15:53:32.936290182 +0000 UTC m=+0.765292738,LastTimestamp:2025-11-05 15:53:32.936290182 +0000 UTC m=+0.765292738,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 5 15:53:38.242735 kubelet[2423]: I1105 15:53:38.242708 2423 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 5 15:53:38.254381 kubelet[2423]: I1105 15:53:38.254326 2423 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 5 15:53:38.531592 kubelet[2423]: E1105 15:53:38.531082 2423 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 5 15:53:38.531592 kubelet[2423]: I1105 15:53:38.531164 2423 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 5 15:53:38.533448 kubelet[2423]: E1105 15:53:38.533405 2423 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 5 15:53:38.533448 kubelet[2423]: I1105 15:53:38.533445 2423 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 5 15:53:38.535322 kubelet[2423]: E1105 15:53:38.535298 2423 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 5 15:53:38.916585 kubelet[2423]: I1105 15:53:38.916526 2423 apiserver.go:52] "Watching apiserver" Nov 5 15:53:38.942573 kubelet[2423]: I1105 15:53:38.942533 2423 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 5 15:53:39.852563 kubelet[2423]: I1105 15:53:39.852490 2423 kubelet.go:3219] "Creating a mirror pod for static pod" 
pod="kube-system/kube-apiserver-localhost" Nov 5 15:53:39.910248 kubelet[2423]: E1105 15:53:39.910178 2423 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:53:40.018456 kubelet[2423]: E1105 15:53:40.018108 2423 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:53:41.095564 kubelet[2423]: I1105 15:53:41.095498 2423 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 5 15:53:41.103614 kubelet[2423]: E1105 15:53:41.103546 2423 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:53:41.657669 systemd[1]: Reload requested from client PID 2724 ('systemctl') (unit session-7.scope)... Nov 5 15:53:41.657687 systemd[1]: Reloading... Nov 5 15:53:41.765019 zram_generator::config[2768]: No configuration found. Nov 5 15:53:42.021817 kubelet[2423]: E1105 15:53:42.021689 2423 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:53:42.041683 systemd[1]: Reloading finished in 383 ms. Nov 5 15:53:42.081822 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:53:42.099124 systemd[1]: kubelet.service: Deactivated successfully. Nov 5 15:53:42.099547 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:53:42.099624 systemd[1]: kubelet.service: Consumed 1.003s CPU time, 126.2M memory peak. Nov 5 15:53:42.102201 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:53:42.352667 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:53:42.372483 (kubelet)[2813]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 5 15:53:42.445006 kubelet[2813]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 5 15:53:42.445006 kubelet[2813]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 15:53:42.445448 kubelet[2813]: I1105 15:53:42.445053 2813 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 5 15:53:42.451982 kubelet[2813]: I1105 15:53:42.451938 2813 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 5 15:53:42.451982 kubelet[2813]: I1105 15:53:42.451960 2813 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 5 15:53:42.451982 kubelet[2813]: I1105 15:53:42.451987 2813 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 5 15:53:42.451982 kubelet[2813]: I1105 15:53:42.451994 2813 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 5 15:53:42.452254 kubelet[2813]: I1105 15:53:42.452199 2813 server.go:956] "Client rotation is on, will bootstrap in background" Nov 5 15:53:42.453556 kubelet[2813]: I1105 15:53:42.453520 2813 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 5 15:53:42.455792 kubelet[2813]: I1105 15:53:42.455757 2813 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 5 15:53:42.463979 kubelet[2813]: I1105 15:53:42.463084 2813 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 5 15:53:42.468316 kubelet[2813]: I1105 15:53:42.468274 2813 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Nov 5 15:53:42.468540 kubelet[2813]: I1105 15:53:42.468497 2813 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 5 15:53:42.468708 kubelet[2813]: I1105 15:53:42.468530 2813 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 5 15:53:42.468708 kubelet[2813]: I1105 15:53:42.468708 2813 topology_manager.go:138] "Creating topology manager with none policy" Nov 5 15:53:42.468806 kubelet[2813]: I1105 15:53:42.468718 2813 container_manager_linux.go:306] "Creating device plugin manager" Nov 5 15:53:42.468806 kubelet[2813]: I1105 15:53:42.468756 2813 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 5 15:53:42.470600 kubelet[2813]: I1105 15:53:42.470567 2813 state_mem.go:36] "Initialized new in-memory state store" Nov 5 15:53:42.470754 kubelet[2813]: I1105 15:53:42.470739 2813 kubelet.go:475] "Attempting to sync node with API server" Nov 5 15:53:42.470754 kubelet[2813]: I1105 15:53:42.470754 2813 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 5 15:53:42.470825 kubelet[2813]: I1105 15:53:42.470781 2813 kubelet.go:387] "Adding apiserver pod source" Nov 5 
15:53:42.470929 kubelet[2813]: I1105 15:53:42.470900 2813 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 5 15:53:42.472021 kubelet[2813]: I1105 15:53:42.471956 2813 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 5 15:53:42.472839 kubelet[2813]: I1105 15:53:42.472822 2813 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 5 15:53:42.472971 kubelet[2813]: I1105 15:53:42.472886 2813 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 5 15:53:42.479176 kubelet[2813]: I1105 15:53:42.479135 2813 server.go:1262] "Started kubelet" Nov 5 15:53:42.480414 kubelet[2813]: I1105 15:53:42.480341 2813 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 5 15:53:42.480679 kubelet[2813]: I1105 15:53:42.480662 2813 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 5 15:53:42.481855 kubelet[2813]: I1105 15:53:42.481833 2813 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 5 15:53:42.482188 kubelet[2813]: I1105 15:53:42.482161 2813 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 5 15:53:42.482484 kubelet[2813]: I1105 15:53:42.482441 2813 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 5 15:53:42.482829 kubelet[2813]: I1105 15:53:42.482799 2813 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 5 15:53:42.485120 kubelet[2813]: I1105 15:53:42.485094 2813 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 5 15:53:42.489280 kubelet[2813]: E1105 15:53:42.488403 2813 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 15:53:42.489280 kubelet[2813]: I1105 15:53:42.489166 2813 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 5 15:53:42.489280 kubelet[2813]: I1105 15:53:42.487775 2813 server.go:310] "Adding debug handlers to kubelet server" Nov 5 15:53:42.489725 kubelet[2813]: I1105 15:53:42.489581 2813 factory.go:223] Registration of the systemd container factory successfully Nov 5 15:53:42.489902 kubelet[2813]: I1105 15:53:42.489870 2813 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 5 15:53:42.490168 kubelet[2813]: I1105 15:53:42.490152 2813 reconciler.go:29] "Reconciler: start to sync state" Nov 5 15:53:42.497508 kubelet[2813]: E1105 15:53:42.497438 2813 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 5 15:53:42.497877 kubelet[2813]: I1105 15:53:42.497859 2813 factory.go:223] Registration of the containerd container factory successfully Nov 5 15:53:42.547535 kubelet[2813]: I1105 15:53:42.547473 2813 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 5 15:53:42.549330 kubelet[2813]: I1105 15:53:42.549299 2813 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Nov 5 15:53:42.549330 kubelet[2813]: I1105 15:53:42.549332 2813 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 5 15:53:42.549406 kubelet[2813]: I1105 15:53:42.549356 2813 kubelet.go:2427] "Starting kubelet main sync loop" Nov 5 15:53:42.549433 kubelet[2813]: E1105 15:53:42.549399 2813 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 5 15:53:42.572683 kubelet[2813]: I1105 15:53:42.572637 2813 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 5 15:53:42.572683 kubelet[2813]: I1105 15:53:42.572657 2813 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 5 15:53:42.572683 kubelet[2813]: I1105 15:53:42.572676 2813 state_mem.go:36] "Initialized new in-memory state store" Nov 5 15:53:42.572902 kubelet[2813]: I1105 15:53:42.572802 2813 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 5 15:53:42.572902 kubelet[2813]: I1105 15:53:42.572811 2813 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 5 15:53:42.572902 kubelet[2813]: I1105 15:53:42.572827 2813 policy_none.go:49] "None policy: Start" Nov 5 15:53:42.572902 kubelet[2813]: I1105 15:53:42.572836 2813 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 5 15:53:42.572902 kubelet[2813]: I1105 15:53:42.572846 2813 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 5 15:53:42.573060 kubelet[2813]: I1105 15:53:42.573008 2813 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Nov 5 15:53:42.573060 kubelet[2813]: I1105 15:53:42.573021 2813 policy_none.go:47] "Start" Nov 5 15:53:42.578285 kubelet[2813]: E1105 15:53:42.578241 2813 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 5 15:53:42.578494 kubelet[2813]: I1105 15:53:42.578479 2813 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 5 15:53:42.578521 kubelet[2813]: I1105 15:53:42.578494 2813 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 5 15:53:42.578893 kubelet[2813]: I1105 15:53:42.578766 2813 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 5 15:53:42.581150 kubelet[2813]: E1105 15:53:42.581114 2813 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 5 15:53:42.651230 kubelet[2813]: I1105 15:53:42.651079 2813 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 5 15:53:42.653241 kubelet[2813]: I1105 15:53:42.653217 2813 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 5 15:53:42.653508 kubelet[2813]: I1105 15:53:42.653482 2813 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 5 15:53:42.664066 kubelet[2813]: E1105 15:53:42.664012 2813 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 5 15:53:42.664495 kubelet[2813]: E1105 15:53:42.664460 2813 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Nov 5 15:53:42.684619 kubelet[2813]: I1105 15:53:42.684563 2813 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 5 15:53:42.690907 kubelet[2813]: I1105 15:53:42.690837 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/803dcac67f86f61d21455eddd7f31201-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"803dcac67f86f61d21455eddd7f31201\") " pod="kube-system/kube-apiserver-localhost" Nov 5 15:53:42.690907 kubelet[2813]: I1105 15:53:42.690899 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 15:53:42.691150 kubelet[2813]: I1105 15:53:42.690949 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 15:53:42.691150 kubelet[2813]: I1105 15:53:42.690979 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 15:53:42.691150 kubelet[2813]: I1105 15:53:42.691007 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Nov 5 15:53:42.691150 kubelet[2813]: I1105 15:53:42.691025 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/803dcac67f86f61d21455eddd7f31201-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"803dcac67f86f61d21455eddd7f31201\") " pod="kube-system/kube-apiserver-localhost" Nov 5 15:53:42.691150 
kubelet[2813]: I1105 15:53:42.691048 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/803dcac67f86f61d21455eddd7f31201-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"803dcac67f86f61d21455eddd7f31201\") " pod="kube-system/kube-apiserver-localhost" Nov 5 15:53:42.691327 kubelet[2813]: I1105 15:53:42.691067 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 15:53:42.691327 kubelet[2813]: I1105 15:53:42.691086 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 15:53:42.697040 kubelet[2813]: I1105 15:53:42.696988 2813 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 5 15:53:42.697196 kubelet[2813]: I1105 15:53:42.697106 2813 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 5 15:53:42.965816 kubelet[2813]: E1105 15:53:42.965468 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:53:42.965816 kubelet[2813]: E1105 15:53:42.965528 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:53:42.965816 kubelet[2813]: E1105 15:53:42.965561 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:53:43.471767 kubelet[2813]: I1105 15:53:43.471719 2813 apiserver.go:52] "Watching apiserver" Nov 5 15:53:43.489885 kubelet[2813]: I1105 15:53:43.489814 2813 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 5 15:53:43.565244 kubelet[2813]: I1105 15:53:43.564165 2813 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 5 15:53:43.565244 kubelet[2813]: E1105 15:53:43.564355 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:53:43.565244 kubelet[2813]: I1105 15:53:43.564382 2813 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 5 15:53:43.571941 kubelet[2813]: E1105 15:53:43.571871 2813 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 5 15:53:43.572139 kubelet[2813]: E1105 15:53:43.572119 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:53:43.573292 kubelet[2813]: E1105 15:53:43.573089 2813 kubelet.go:3221] "Failed creating a mirror pod" err="pods 
\"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 5 15:53:43.573292 kubelet[2813]: E1105 15:53:43.573231 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:53:43.588282 kubelet[2813]: I1105 15:53:43.588191 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.588168081 podStartE2EDuration="2.588168081s" podCreationTimestamp="2025-11-05 15:53:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:53:43.58783662 +0000 UTC m=+1.180712349" watchObservedRunningTime="2025-11-05 15:53:43.588168081 +0000 UTC m=+1.181043810" Nov 5 15:53:43.606071 kubelet[2813]: I1105 15:53:43.605629 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=4.605606765 podStartE2EDuration="4.605606765s" podCreationTimestamp="2025-11-05 15:53:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:53:43.597518094 +0000 UTC m=+1.190393823" watchObservedRunningTime="2025-11-05 15:53:43.605606765 +0000 UTC m=+1.198482494" Nov 5 15:53:43.606359 kubelet[2813]: I1105 15:53:43.606165 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.606158217 podStartE2EDuration="1.606158217s" podCreationTimestamp="2025-11-05 15:53:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:53:43.606148218 +0000 UTC m=+1.199023947" watchObservedRunningTime="2025-11-05 15:53:43.606158217 +0000 UTC m=+1.199033946" Nov 5 15:53:44.566896 kubelet[2813]: E1105 15:53:44.566853 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:53:44.567439 kubelet[2813]: E1105 15:53:44.567073 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:53:45.570181 kubelet[2813]: E1105 15:53:45.570124 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:53:45.570181 kubelet[2813]: E1105 15:53:45.570139 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:53:46.307808 kubelet[2813]: I1105 15:53:46.307765 2813 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 5 15:53:46.308554 containerd[1641]: time="2025-11-05T15:53:46.308506353Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Nov 5 15:53:46.309069 kubelet[2813]: I1105 15:53:46.308745 2813 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 5 15:53:47.116971 kubelet[2813]: I1105 15:53:47.116693 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7433a127-3214-4f95-a6a8-ae475ffc7d85-xtables-lock\") pod \"kube-proxy-4d7hg\" (UID: \"7433a127-3214-4f95-a6a8-ae475ffc7d85\") " pod="kube-system/kube-proxy-4d7hg" Nov 5 15:53:47.116971 kubelet[2813]: I1105 15:53:47.116733 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7433a127-3214-4f95-a6a8-ae475ffc7d85-lib-modules\") pod \"kube-proxy-4d7hg\" (UID: \"7433a127-3214-4f95-a6a8-ae475ffc7d85\") " pod="kube-system/kube-proxy-4d7hg" Nov 5 15:53:47.116971 kubelet[2813]: I1105 15:53:47.116755 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7433a127-3214-4f95-a6a8-ae475ffc7d85-kube-proxy\") pod \"kube-proxy-4d7hg\" (UID: \"7433a127-3214-4f95-a6a8-ae475ffc7d85\") " pod="kube-system/kube-proxy-4d7hg" Nov 5 15:53:47.116971 kubelet[2813]: I1105 15:53:47.116776 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95s8m\" (UniqueName: \"kubernetes.io/projected/7433a127-3214-4f95-a6a8-ae475ffc7d85-kube-api-access-95s8m\") pod \"kube-proxy-4d7hg\" (UID: \"7433a127-3214-4f95-a6a8-ae475ffc7d85\") " pod="kube-system/kube-proxy-4d7hg" Nov 5 15:53:47.123090 systemd[1]: Created slice kubepods-besteffort-pod7433a127_3214_4f95_a6a8_ae475ffc7d85.slice - libcontainer container kubepods-besteffort-pod7433a127_3214_4f95_a6a8_ae475ffc7d85.slice. Nov 5 15:53:47.493703 kubelet[2813]: E1105 15:53:47.493205 2813 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 5 15:53:47.493703 kubelet[2813]: E1105 15:53:47.493256 2813 projected.go:196] Error preparing data for projected volume kube-api-access-95s8m for pod kube-system/kube-proxy-4d7hg: configmap "kube-root-ca.crt" not found Nov 5 15:53:47.493703 kubelet[2813]: E1105 15:53:47.493348 2813 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7433a127-3214-4f95-a6a8-ae475ffc7d85-kube-api-access-95s8m podName:7433a127-3214-4f95-a6a8-ae475ffc7d85 nodeName:}" failed. No retries permitted until 2025-11-05 15:53:47.993323237 +0000 UTC m=+5.586198966 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-95s8m" (UniqueName: "kubernetes.io/projected/7433a127-3214-4f95-a6a8-ae475ffc7d85-kube-api-access-95s8m") pod "kube-proxy-4d7hg" (UID: "7433a127-3214-4f95-a6a8-ae475ffc7d85") : configmap "kube-root-ca.crt" not found Nov 5 15:53:47.607235 kubelet[2813]: E1105 15:53:47.607171 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:53:47.867080 systemd[1]: Created slice kubepods-besteffort-podcacfacfc_1f6f_4d3c_a36c_b7712358d769.slice - libcontainer container kubepods-besteffort-podcacfacfc_1f6f_4d3c_a36c_b7712358d769.slice. 
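The MountVolume.SetUp failure above is not fatal: the operation is re-queued on a backoff, with the first retry gated 500ms out ("durationBeforeRetry 500ms"). A sketch of the doubling schedule that gating implies; the 500ms start comes from the log, while the factor of 2 and the roughly two-minute cap are assumptions about kubelet's exponential backoff rather than values read from its source:

package main

import (
	"fmt"
	"time"
)

func main() {
	// The 500ms starting point is taken from "durationBeforeRetry 500ms" in
	// the log; the doubling factor and the ~2m cap are assumptions.
	const (
		initial = 500 * time.Millisecond
		maxWait = 2*time.Minute + 2*time.Second
	)

	wait := initial
	for attempt := 1; attempt <= 10; attempt++ {
		fmt.Printf("retry %2d gated for %v\n", attempt, wait)
		wait *= 2
		if wait > maxWait {
			wait = maxWait
		}
	}
}

Here the retry succeeds quickly in any case: kube-root-ca.crt appears once the apiserver publishes it, and the sandbox for kube-proxy-4d7hg is created moments later.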
Nov 5 15:53:47.923065 kubelet[2813]: I1105 15:53:47.922963 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cacfacfc-1f6f-4d3c-a36c-b7712358d769-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-wtsnh\" (UID: \"cacfacfc-1f6f-4d3c-a36c-b7712358d769\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-wtsnh" Nov 5 15:53:47.923065 kubelet[2813]: I1105 15:53:47.923020 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bhwq\" (UniqueName: \"kubernetes.io/projected/cacfacfc-1f6f-4d3c-a36c-b7712358d769-kube-api-access-6bhwq\") pod \"tigera-operator-65cdcdfd6d-wtsnh\" (UID: \"cacfacfc-1f6f-4d3c-a36c-b7712358d769\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-wtsnh" Nov 5 15:53:48.039878 kubelet[2813]: E1105 15:53:48.039814 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:53:48.040796 containerd[1641]: time="2025-11-05T15:53:48.040756661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4d7hg,Uid:7433a127-3214-4f95-a6a8-ae475ffc7d85,Namespace:kube-system,Attempt:0,}" Nov 5 15:53:48.068173 containerd[1641]: time="2025-11-05T15:53:48.068102207Z" level=info msg="connecting to shim 90be700734141a9341b3939b8081a6e5555b9a9ad94d2f2e1d34d28dcb9aa36a" address="unix:///run/containerd/s/7de7e30a4cfdfdb11b29d09f0a193c4d48ca928cb7541689cbd88e07457b7df5" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:53:48.103125 systemd[1]: Started cri-containerd-90be700734141a9341b3939b8081a6e5555b9a9ad94d2f2e1d34d28dcb9aa36a.scope - libcontainer container 90be700734141a9341b3939b8081a6e5555b9a9ad94d2f2e1d34d28dcb9aa36a. 
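The recurring dns.go:154 "Nameserver limits exceeded" errors come from kubelet trimming the host's resolv.conf: classic libc resolvers honor at most three nameserver entries, and the applied line above keeps exactly three (1.1.1.1 1.0.0.1 8.8.8.8). A sketch of that trimming, assuming the limit is 3:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// maxNameservers is an assumption based on the three survivors in the applied
// line above; classic libc resolvers read at most three nameserver entries.
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		panic(err)
	}

	if len(servers) > maxNameservers {
		fmt.Printf("nameserver limits exceeded; dropping %v\n", servers[maxNameservers:])
		servers = servers[:maxNameservers]
	}
	fmt.Println("applied nameserver line:", strings.Join(servers, " "))
}

The error repeats because kubelet re-checks resolv.conf on each pod sync; the fix is to configure at most three upstream servers on the node, not anything in the cluster.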
Nov 5 15:53:48.136270 containerd[1641]: time="2025-11-05T15:53:48.136123226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4d7hg,Uid:7433a127-3214-4f95-a6a8-ae475ffc7d85,Namespace:kube-system,Attempt:0,} returns sandbox id \"90be700734141a9341b3939b8081a6e5555b9a9ad94d2f2e1d34d28dcb9aa36a\"" Nov 5 15:53:48.137309 kubelet[2813]: E1105 15:53:48.137274 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:53:48.159998 containerd[1641]: time="2025-11-05T15:53:48.159901870Z" level=info msg="CreateContainer within sandbox \"90be700734141a9341b3939b8081a6e5555b9a9ad94d2f2e1d34d28dcb9aa36a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 5 15:53:48.177385 containerd[1641]: time="2025-11-05T15:53:48.177344382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-wtsnh,Uid:cacfacfc-1f6f-4d3c-a36c-b7712358d769,Namespace:tigera-operator,Attempt:0,}" Nov 5 15:53:48.177761 containerd[1641]: time="2025-11-05T15:53:48.177384636Z" level=info msg="Container 890220d7fa662b7ad180d586c3d1385f3f028f514d5055b6ec09838950d912e7: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:53:48.192938 containerd[1641]: time="2025-11-05T15:53:48.192865503Z" level=info msg="CreateContainer within sandbox \"90be700734141a9341b3939b8081a6e5555b9a9ad94d2f2e1d34d28dcb9aa36a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"890220d7fa662b7ad180d586c3d1385f3f028f514d5055b6ec09838950d912e7\"" Nov 5 15:53:48.193820 containerd[1641]: time="2025-11-05T15:53:48.193599729Z" level=info msg="StartContainer for \"890220d7fa662b7ad180d586c3d1385f3f028f514d5055b6ec09838950d912e7\"" Nov 5 15:53:48.195361 containerd[1641]: time="2025-11-05T15:53:48.195332625Z" level=info msg="connecting to shim 890220d7fa662b7ad180d586c3d1385f3f028f514d5055b6ec09838950d912e7" address="unix:///run/containerd/s/7de7e30a4cfdfdb11b29d09f0a193c4d48ca928cb7541689cbd88e07457b7df5" protocol=ttrpc version=3 Nov 5 15:53:48.220107 containerd[1641]: time="2025-11-05T15:53:48.220043836Z" level=info msg="connecting to shim 392aa85ea116e36aaa87851f67b920b1591e01bdf5db847dae2c6aba5b9ea628" address="unix:///run/containerd/s/caf420880ceedfb3b6aaff5c2052ab712f07954f6bf12785fc5598ad00198653" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:53:48.221372 systemd[1]: Started cri-containerd-890220d7fa662b7ad180d586c3d1385f3f028f514d5055b6ec09838950d912e7.scope - libcontainer container 890220d7fa662b7ad180d586c3d1385f3f028f514d5055b6ec09838950d912e7. Nov 5 15:53:48.257328 systemd[1]: Started cri-containerd-392aa85ea116e36aaa87851f67b920b1591e01bdf5db847dae2c6aba5b9ea628.scope - libcontainer container 392aa85ea116e36aaa87851f67b920b1591e01bdf5db847dae2c6aba5b9ea628. 
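Each "connecting to shim" entry above names a per-sandbox ttrpc socket under /run/containerd/s/. When a sandbox hangs at this step, the first thing worth checking is whether the socket accepts connections at all; a plain-Go probe (deliberately not the containerd client API), with the path copied from the kube-proxy entry above and to be substituted per sandbox:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Socket path copied from the "connecting to shim" entry above; on a live
	// node substitute the path of the sandbox you are inspecting.
	const sock = "/run/containerd/s/7de7e30a4cfdfdb11b29d09f0a193c4d48ca928cb7541689cbd88e07457b7df5"

	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Println("shim socket not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("shim socket accepting connections:", sock)
}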
Nov 5 15:53:48.300372 containerd[1641]: time="2025-11-05T15:53:48.300316321Z" level=info msg="StartContainer for \"890220d7fa662b7ad180d586c3d1385f3f028f514d5055b6ec09838950d912e7\" returns successfully" Nov 5 15:53:48.453486 containerd[1641]: time="2025-11-05T15:53:48.453334497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-wtsnh,Uid:cacfacfc-1f6f-4d3c-a36c-b7712358d769,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"392aa85ea116e36aaa87851f67b920b1591e01bdf5db847dae2c6aba5b9ea628\"" Nov 5 15:53:48.455307 containerd[1641]: time="2025-11-05T15:53:48.455278820Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 5 15:53:48.579276 kubelet[2813]: E1105 15:53:48.579215 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:53:48.579857 kubelet[2813]: E1105 15:53:48.579820 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:53:50.890137 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount307327755.mount: Deactivated successfully. Nov 5 15:53:51.292288 containerd[1641]: time="2025-11-05T15:53:51.292123584Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:53:51.293113 containerd[1641]: time="2025-11-05T15:53:51.293064547Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Nov 5 15:53:51.294536 containerd[1641]: time="2025-11-05T15:53:51.294477464Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:53:51.297468 containerd[1641]: time="2025-11-05T15:53:51.297422884Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:53:51.298639 containerd[1641]: time="2025-11-05T15:53:51.298601794Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.843290895s" Nov 5 15:53:51.298706 containerd[1641]: time="2025-11-05T15:53:51.298637911Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Nov 5 15:53:51.306237 containerd[1641]: time="2025-11-05T15:53:51.306187579Z" level=info msg="CreateContainer within sandbox \"392aa85ea116e36aaa87851f67b920b1591e01bdf5db847dae2c6aba5b9ea628\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 5 15:53:51.318180 containerd[1641]: time="2025-11-05T15:53:51.318116673Z" level=info msg="Container 4ab7cf661db58e350b47f8c6b2ce2c5dec5c9ad368ddb057adaa0d848182b2b0: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:53:51.326187 containerd[1641]: time="2025-11-05T15:53:51.326129309Z" level=info msg="CreateContainer within sandbox \"392aa85ea116e36aaa87851f67b920b1591e01bdf5db847dae2c6aba5b9ea628\" for 
&ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"4ab7cf661db58e350b47f8c6b2ce2c5dec5c9ad368ddb057adaa0d848182b2b0\"" Nov 5 15:53:51.326898 containerd[1641]: time="2025-11-05T15:53:51.326837366Z" level=info msg="StartContainer for \"4ab7cf661db58e350b47f8c6b2ce2c5dec5c9ad368ddb057adaa0d848182b2b0\"" Nov 5 15:53:51.328073 containerd[1641]: time="2025-11-05T15:53:51.328012709Z" level=info msg="connecting to shim 4ab7cf661db58e350b47f8c6b2ce2c5dec5c9ad368ddb057adaa0d848182b2b0" address="unix:///run/containerd/s/caf420880ceedfb3b6aaff5c2052ab712f07954f6bf12785fc5598ad00198653" protocol=ttrpc version=3 Nov 5 15:53:51.398122 systemd[1]: Started cri-containerd-4ab7cf661db58e350b47f8c6b2ce2c5dec5c9ad368ddb057adaa0d848182b2b0.scope - libcontainer container 4ab7cf661db58e350b47f8c6b2ce2c5dec5c9ad368ddb057adaa0d848182b2b0. Nov 5 15:53:51.442487 containerd[1641]: time="2025-11-05T15:53:51.442416849Z" level=info msg="StartContainer for \"4ab7cf661db58e350b47f8c6b2ce2c5dec5c9ad368ddb057adaa0d848182b2b0\" returns successfully" Nov 5 15:53:51.597632 kubelet[2813]: I1105 15:53:51.597531 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4d7hg" podStartSLOduration=4.597510619 podStartE2EDuration="4.597510619s" podCreationTimestamp="2025-11-05 15:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:53:48.674963417 +0000 UTC m=+6.267839146" watchObservedRunningTime="2025-11-05 15:53:51.597510619 +0000 UTC m=+9.190386348" Nov 5 15:53:51.598313 kubelet[2813]: I1105 15:53:51.597666 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-wtsnh" podStartSLOduration=1.753004759 podStartE2EDuration="4.597660901s" podCreationTimestamp="2025-11-05 15:53:47 +0000 UTC" firstStartedPulling="2025-11-05 15:53:48.45484134 +0000 UTC m=+6.047717069" lastFinishedPulling="2025-11-05 15:53:51.299497482 +0000 UTC m=+8.892373211" observedRunningTime="2025-11-05 15:53:51.59743606 +0000 UTC m=+9.190311789" watchObservedRunningTime="2025-11-05 15:53:51.597660901 +0000 UTC m=+9.190536640" Nov 5 15:53:53.136474 kubelet[2813]: E1105 15:53:53.136419 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:53:53.601771 kubelet[2813]: E1105 15:53:53.601702 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:53:54.051894 kubelet[2813]: E1105 15:53:54.044818 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:54:01.100666 sudo[1839]: pam_unix(sudo:session): session closed for user root Nov 5 15:54:01.102848 sshd[1838]: Connection closed by 10.0.0.1 port 42718 Nov 5 15:54:01.106519 sshd-session[1835]: pam_unix(sshd:session): session closed for user core Nov 5 15:54:01.111400 systemd[1]: sshd@6-10.0.0.94:22-10.0.0.1:42718.service: Deactivated successfully. Nov 5 15:54:01.113815 systemd[1]: session-7.scope: Deactivated successfully. Nov 5 15:54:01.114092 systemd[1]: session-7.scope: Consumed 9.017s CPU time, 222M memory peak. Nov 5 15:54:01.115428 systemd-logind[1620]: Session 7 logged out. 
Waiting for processes to exit. Nov 5 15:54:01.116972 systemd-logind[1620]: Removed session 7. Nov 5 15:54:13.576611 systemd[1]: Created slice kubepods-besteffort-pod4cfef64a_c898_47be_a444_83817f908681.slice - libcontainer container kubepods-besteffort-pod4cfef64a_c898_47be_a444_83817f908681.slice. Nov 5 15:54:13.606517 kubelet[2813]: I1105 15:54:13.606421 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4cfef64a-c898-47be-a444-83817f908681-tigera-ca-bundle\") pod \"calico-typha-5bdc44466b-w6bvh\" (UID: \"4cfef64a-c898-47be-a444-83817f908681\") " pod="calico-system/calico-typha-5bdc44466b-w6bvh" Nov 5 15:54:13.606517 kubelet[2813]: I1105 15:54:13.606513 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4cfef64a-c898-47be-a444-83817f908681-typha-certs\") pod \"calico-typha-5bdc44466b-w6bvh\" (UID: \"4cfef64a-c898-47be-a444-83817f908681\") " pod="calico-system/calico-typha-5bdc44466b-w6bvh" Nov 5 15:54:13.607192 kubelet[2813]: I1105 15:54:13.606556 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqrng\" (UniqueName: \"kubernetes.io/projected/4cfef64a-c898-47be-a444-83817f908681-kube-api-access-mqrng\") pod \"calico-typha-5bdc44466b-w6bvh\" (UID: \"4cfef64a-c898-47be-a444-83817f908681\") " pod="calico-system/calico-typha-5bdc44466b-w6bvh" Nov 5 15:54:13.748963 systemd[1]: Created slice kubepods-besteffort-pod7a1db646_8f37_407f_8226_cb551f93fc27.slice - libcontainer container kubepods-besteffort-pod7a1db646_8f37_407f_8226_cb551f93fc27.slice. Nov 5 15:54:13.809952 kubelet[2813]: I1105 15:54:13.809850 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7a1db646-8f37-407f-8226-cb551f93fc27-flexvol-driver-host\") pod \"calico-node-pmv2p\" (UID: \"7a1db646-8f37-407f-8226-cb551f93fc27\") " pod="calico-system/calico-node-pmv2p" Nov 5 15:54:13.809952 kubelet[2813]: I1105 15:54:13.809946 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7a1db646-8f37-407f-8226-cb551f93fc27-node-certs\") pod \"calico-node-pmv2p\" (UID: \"7a1db646-8f37-407f-8226-cb551f93fc27\") " pod="calico-system/calico-node-pmv2p" Nov 5 15:54:13.810175 kubelet[2813]: I1105 15:54:13.809971 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7a1db646-8f37-407f-8226-cb551f93fc27-tigera-ca-bundle\") pod \"calico-node-pmv2p\" (UID: \"7a1db646-8f37-407f-8226-cb551f93fc27\") " pod="calico-system/calico-node-pmv2p" Nov 5 15:54:13.810175 kubelet[2813]: I1105 15:54:13.810056 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7a1db646-8f37-407f-8226-cb551f93fc27-var-lib-calico\") pod \"calico-node-pmv2p\" (UID: \"7a1db646-8f37-407f-8226-cb551f93fc27\") " pod="calico-system/calico-node-pmv2p" Nov 5 15:54:13.810175 kubelet[2813]: I1105 15:54:13.810152 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: 
\"kubernetes.io/host-path/7a1db646-8f37-407f-8226-cb551f93fc27-cni-log-dir\") pod \"calico-node-pmv2p\" (UID: \"7a1db646-8f37-407f-8226-cb551f93fc27\") " pod="calico-system/calico-node-pmv2p" Nov 5 15:54:13.810282 kubelet[2813]: I1105 15:54:13.810180 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7a1db646-8f37-407f-8226-cb551f93fc27-cni-net-dir\") pod \"calico-node-pmv2p\" (UID: \"7a1db646-8f37-407f-8226-cb551f93fc27\") " pod="calico-system/calico-node-pmv2p" Nov 5 15:54:13.810317 kubelet[2813]: I1105 15:54:13.810279 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7a1db646-8f37-407f-8226-cb551f93fc27-xtables-lock\") pod \"calico-node-pmv2p\" (UID: \"7a1db646-8f37-407f-8226-cb551f93fc27\") " pod="calico-system/calico-node-pmv2p" Nov 5 15:54:13.810317 kubelet[2813]: I1105 15:54:13.810311 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7a1db646-8f37-407f-8226-cb551f93fc27-var-run-calico\") pod \"calico-node-pmv2p\" (UID: \"7a1db646-8f37-407f-8226-cb551f93fc27\") " pod="calico-system/calico-node-pmv2p" Nov 5 15:54:13.810377 kubelet[2813]: I1105 15:54:13.810336 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrgd4\" (UniqueName: \"kubernetes.io/projected/7a1db646-8f37-407f-8226-cb551f93fc27-kube-api-access-hrgd4\") pod \"calico-node-pmv2p\" (UID: \"7a1db646-8f37-407f-8226-cb551f93fc27\") " pod="calico-system/calico-node-pmv2p" Nov 5 15:54:13.810377 kubelet[2813]: I1105 15:54:13.810376 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7a1db646-8f37-407f-8226-cb551f93fc27-policysync\") pod \"calico-node-pmv2p\" (UID: \"7a1db646-8f37-407f-8226-cb551f93fc27\") " pod="calico-system/calico-node-pmv2p" Nov 5 15:54:13.810377 kubelet[2813]: I1105 15:54:13.810424 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7a1db646-8f37-407f-8226-cb551f93fc27-lib-modules\") pod \"calico-node-pmv2p\" (UID: \"7a1db646-8f37-407f-8226-cb551f93fc27\") " pod="calico-system/calico-node-pmv2p" Nov 5 15:54:13.810675 kubelet[2813]: I1105 15:54:13.810537 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7a1db646-8f37-407f-8226-cb551f93fc27-cni-bin-dir\") pod \"calico-node-pmv2p\" (UID: \"7a1db646-8f37-407f-8226-cb551f93fc27\") " pod="calico-system/calico-node-pmv2p" Nov 5 15:54:13.867390 kubelet[2813]: E1105 15:54:13.867298 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gf82q" podUID="5cbe1702-972a-4f84-9d2f-51b96b54edda" Nov 5 15:54:13.900227 kubelet[2813]: E1105 15:54:13.900159 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:54:13.901469 containerd[1641]: 
time="2025-11-05T15:54:13.901396544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5bdc44466b-w6bvh,Uid:4cfef64a-c898-47be-a444-83817f908681,Namespace:calico-system,Attempt:0,}" Nov 5 15:54:13.910959 kubelet[2813]: I1105 15:54:13.910880 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/5cbe1702-972a-4f84-9d2f-51b96b54edda-varrun\") pod \"csi-node-driver-gf82q\" (UID: \"5cbe1702-972a-4f84-9d2f-51b96b54edda\") " pod="calico-system/csi-node-driver-gf82q" Nov 5 15:54:13.911435 kubelet[2813]: I1105 15:54:13.911412 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5cbe1702-972a-4f84-9d2f-51b96b54edda-registration-dir\") pod \"csi-node-driver-gf82q\" (UID: \"5cbe1702-972a-4f84-9d2f-51b96b54edda\") " pod="calico-system/csi-node-driver-gf82q" Nov 5 15:54:13.911666 kubelet[2813]: I1105 15:54:13.911625 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5n4s9\" (UniqueName: \"kubernetes.io/projected/5cbe1702-972a-4f84-9d2f-51b96b54edda-kube-api-access-5n4s9\") pod \"csi-node-driver-gf82q\" (UID: \"5cbe1702-972a-4f84-9d2f-51b96b54edda\") " pod="calico-system/csi-node-driver-gf82q" Nov 5 15:54:13.911967 kubelet[2813]: I1105 15:54:13.911897 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5cbe1702-972a-4f84-9d2f-51b96b54edda-kubelet-dir\") pod \"csi-node-driver-gf82q\" (UID: \"5cbe1702-972a-4f84-9d2f-51b96b54edda\") " pod="calico-system/csi-node-driver-gf82q" Nov 5 15:54:13.912146 kubelet[2813]: I1105 15:54:13.912055 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5cbe1702-972a-4f84-9d2f-51b96b54edda-socket-dir\") pod \"csi-node-driver-gf82q\" (UID: \"5cbe1702-972a-4f84-9d2f-51b96b54edda\") " pod="calico-system/csi-node-driver-gf82q" Nov 5 15:54:13.925404 kubelet[2813]: E1105 15:54:13.925319 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:54:13.925404 kubelet[2813]: W1105 15:54:13.925454 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:54:13.925404 kubelet[2813]: E1105 15:54:13.925502 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:54:13.929413 kubelet[2813]: E1105 15:54:13.929343 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:54:13.929670 kubelet[2813]: W1105 15:54:13.929369 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:54:13.929670 kubelet[2813]: E1105 15:54:13.929591 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:54:13.950432 kubelet[2813]: E1105 15:54:13.950399 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:54:13.950857 kubelet[2813]: W1105 15:54:13.950707 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:54:13.950857 kubelet[2813]: E1105 15:54:13.950738 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:54:13.961855 kubelet[2813]: E1105 15:54:13.961793 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:54:13.961855 kubelet[2813]: W1105 15:54:13.961844 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:54:13.962059 kubelet[2813]: E1105 15:54:13.961875 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:54:14.015365 kubelet[2813]: E1105 15:54:14.015314 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:54:14.015365 kubelet[2813]: W1105 15:54:14.015351 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:54:14.015597 kubelet[2813]: E1105 15:54:14.015398 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:54:14.016231 kubelet[2813]: E1105 15:54:14.016201 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:54:14.016231 kubelet[2813]: W1105 15:54:14.016226 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:54:14.016339 kubelet[2813]: E1105 15:54:14.016240 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:54:14.019604 kubelet[2813]: E1105 15:54:14.019560 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:54:14.019604 kubelet[2813]: W1105 15:54:14.019593 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:54:14.019798 kubelet[2813]: E1105 15:54:14.019613 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:54:14.022576 kubelet[2813]: E1105 15:54:14.022536 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:54:14.022576 kubelet[2813]: W1105 15:54:14.022566 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:54:14.022790 kubelet[2813]: E1105 15:54:14.022595 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:54:14.028437 kubelet[2813]: E1105 15:54:14.025537 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:54:14.028437 kubelet[2813]: W1105 15:54:14.025568 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:54:14.028437 kubelet[2813]: E1105 15:54:14.025594 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:54:14.029057 kubelet[2813]: E1105 15:54:14.029025 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:54:14.029391 kubelet[2813]: W1105 15:54:14.029175 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:54:14.029391 kubelet[2813]: E1105 15:54:14.029215 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:54:14.029634 kubelet[2813]: E1105 15:54:14.029614 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:54:14.029695 kubelet[2813]: W1105 15:54:14.029683 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:54:14.029756 kubelet[2813]: E1105 15:54:14.029745 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:54:14.029999 kubelet[2813]: E1105 15:54:14.029986 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:54:14.030072 kubelet[2813]: W1105 15:54:14.030059 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:54:14.030131 kubelet[2813]: E1105 15:54:14.030121 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:54:14.031084 kubelet[2813]: E1105 15:54:14.030325 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:54:14.031084 kubelet[2813]: W1105 15:54:14.030337 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:54:14.031084 kubelet[2813]: E1105 15:54:14.030346 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:54:14.031210 kubelet[2813]: E1105 15:54:14.031093 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:54:14.031210 kubelet[2813]: W1105 15:54:14.031108 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:54:14.031210 kubelet[2813]: E1105 15:54:14.031121 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:54:14.031504 kubelet[2813]: E1105 15:54:14.031480 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:54:14.031504 kubelet[2813]: W1105 15:54:14.031501 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:54:14.031575 kubelet[2813]: E1105 15:54:14.031516 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:54:14.032479 kubelet[2813]: E1105 15:54:14.032060 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:54:14.032479 kubelet[2813]: W1105 15:54:14.032082 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:54:14.032479 kubelet[2813]: E1105 15:54:14.032096 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:54:14.033103 kubelet[2813]: E1105 15:54:14.033070 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:54:14.033171 kubelet[2813]: W1105 15:54:14.033097 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:54:14.033171 kubelet[2813]: E1105 15:54:14.033122 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:54:14.033555 kubelet[2813]: E1105 15:54:14.033530 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:54:14.033652 kubelet[2813]: W1105 15:54:14.033553 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:54:14.033652 kubelet[2813]: E1105 15:54:14.033569 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:54:14.036632 kubelet[2813]: E1105 15:54:14.036588 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:54:14.036632 kubelet[2813]: W1105 15:54:14.036608 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:54:14.036632 kubelet[2813]: E1105 15:54:14.036636 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:54:14.037275 kubelet[2813]: E1105 15:54:14.037119 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:54:14.037275 kubelet[2813]: W1105 15:54:14.037134 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:54:14.037275 kubelet[2813]: E1105 15:54:14.037157 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:54:14.037525 kubelet[2813]: E1105 15:54:14.037356 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:54:14.037525 kubelet[2813]: W1105 15:54:14.037367 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:54:14.037525 kubelet[2813]: E1105 15:54:14.037390 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:54:14.038077 kubelet[2813]: E1105 15:54:14.037578 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:54:14.038077 kubelet[2813]: W1105 15:54:14.037588 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:54:14.038077 kubelet[2813]: E1105 15:54:14.037603 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:54:14.038077 kubelet[2813]: E1105 15:54:14.037903 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:54:14.038077 kubelet[2813]: W1105 15:54:14.037916 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:54:14.038077 kubelet[2813]: E1105 15:54:14.037969 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:54:14.038415 kubelet[2813]: E1105 15:54:14.038265 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:54:14.038415 kubelet[2813]: W1105 15:54:14.038278 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:54:14.038415 kubelet[2813]: E1105 15:54:14.038296 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:54:14.038868 kubelet[2813]: E1105 15:54:14.038546 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:54:14.038868 kubelet[2813]: W1105 15:54:14.038557 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:54:14.038868 kubelet[2813]: E1105 15:54:14.038567 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:54:14.038868 kubelet[2813]: E1105 15:54:14.038869 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:54:14.038868 kubelet[2813]: W1105 15:54:14.038881 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:54:14.039452 kubelet[2813]: E1105 15:54:14.038893 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:54:14.039452 kubelet[2813]: E1105 15:54:14.039214 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:54:14.039452 kubelet[2813]: W1105 15:54:14.039229 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:54:14.039452 kubelet[2813]: E1105 15:54:14.039245 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:54:14.039592 kubelet[2813]: E1105 15:54:14.039487 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:54:14.039592 kubelet[2813]: W1105 15:54:14.039498 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:54:14.039592 kubelet[2813]: E1105 15:54:14.039510 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:54:14.039908 kubelet[2813]: E1105 15:54:14.039755 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:54:14.039908 kubelet[2813]: W1105 15:54:14.039773 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:54:14.039908 kubelet[2813]: E1105 15:54:14.039785 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:54:14.054202 containerd[1641]: time="2025-11-05T15:54:14.054144120Z" level=info msg="connecting to shim 53ea775c02f59ced5cf86ccff029c2bcfed83bbf3860135afbec631b773b0a55" address="unix:///run/containerd/s/bf4766203f149a79cc9a18c3a1a0ef9056e0ad7d0aecd4ca7bded05f902f0488" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:54:14.072800 kubelet[2813]: E1105 15:54:14.072731 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:54:14.074037 containerd[1641]: time="2025-11-05T15:54:14.073950708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pmv2p,Uid:7a1db646-8f37-407f-8226-cb551f93fc27,Namespace:calico-system,Attempt:0,}" Nov 5 15:54:14.081474 kubelet[2813]: E1105 15:54:14.081417 2813 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:54:14.081474 kubelet[2813]: W1105 15:54:14.081452 2813 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:54:14.081726 kubelet[2813]: E1105 15:54:14.081486 2813 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:54:14.109866 systemd[1]: Started cri-containerd-53ea775c02f59ced5cf86ccff029c2bcfed83bbf3860135afbec631b773b0a55.scope - libcontainer container 53ea775c02f59ced5cf86ccff029c2bcfed83bbf3860135afbec631b773b0a55. 
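The long run of driver-call failures above all decode the same way: kubelet probes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument init, the binary does not exist yet, the resulting empty output fails JSON decoding ("unexpected end of JSON input"), and the nodeagent~uds FlexVolume plugin is skipped. A minimal stand-in that would satisfy the probe; the response shape (a status string plus a capabilities object) follows the usual FlexVolume convention and should be treated as an assumption rather than a spec quote:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus is the kind of reply kubelet's driver-call machinery expects on
// stdout. The field names follow the common FlexVolume convention (status plus
// capabilities); verify them against your kubelet version before relying on this.
type driverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		reply, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(reply)) // non-empty JSON, so kubelet's unmarshal succeeds
		return
	}
	// Any other call: declare it unsupported so kubelet falls back.
	reply, _ := json.Marshal(driverStatus{Status: "Not supported"})
	fmt.Println(string(reply))
	os.Exit(1)
}

On this node no stand-in is needed: the flexvol-driver-host host-path volume mounted into calico-node-pmv2p suggests Calico's flexvol-driver init container, started below, installs the real uds binary at that path, after which the probe noise stops.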
Nov 5 15:54:14.146186 containerd[1641]: time="2025-11-05T15:54:14.145983332Z" level=info msg="connecting to shim 2fb3f202c4cf0ca13def5f9af0a521e7e5eaf7043f38862c53812572ce2b70dd" address="unix:///run/containerd/s/445ce264b403ab9fc327d51ac4da22a2c7d8ae72daca153b015963eff8f2bb57" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:54:14.199227 systemd[1]: Started cri-containerd-2fb3f202c4cf0ca13def5f9af0a521e7e5eaf7043f38862c53812572ce2b70dd.scope - libcontainer container 2fb3f202c4cf0ca13def5f9af0a521e7e5eaf7043f38862c53812572ce2b70dd. Nov 5 15:54:14.324668 containerd[1641]: time="2025-11-05T15:54:14.323645376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5bdc44466b-w6bvh,Uid:4cfef64a-c898-47be-a444-83817f908681,Namespace:calico-system,Attempt:0,} returns sandbox id \"53ea775c02f59ced5cf86ccff029c2bcfed83bbf3860135afbec631b773b0a55\"" Nov 5 15:54:14.325352 containerd[1641]: time="2025-11-05T15:54:14.325301812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pmv2p,Uid:7a1db646-8f37-407f-8226-cb551f93fc27,Namespace:calico-system,Attempt:0,} returns sandbox id \"2fb3f202c4cf0ca13def5f9af0a521e7e5eaf7043f38862c53812572ce2b70dd\"" Nov 5 15:54:14.326472 kubelet[2813]: E1105 15:54:14.326192 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:54:14.328542 kubelet[2813]: E1105 15:54:14.327516 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:54:14.329263 containerd[1641]: time="2025-11-05T15:54:14.329211904Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 5 15:54:15.554311 kubelet[2813]: E1105 15:54:15.551017 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gf82q" podUID="5cbe1702-972a-4f84-9d2f-51b96b54edda" Nov 5 15:54:15.961905 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2416412172.mount: Deactivated successfully. 
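The csi-node-driver-gf82q pod keeps failing with "cni plugin not initialized" because containerd reports NetworkReady=false until a CNI config appears; Calico writes one once calico-node is up. A quick readiness check, assuming the conf directory is containerd's default /etc/cni/net.d (the log does not show this node's containerd configuration):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// /etc/cni/net.d as the CNI config directory is an assumption here.
	confs, _ := filepath.Glob("/etc/cni/net.d/*.conf")
	lists, _ := filepath.Glob("/etc/cni/net.d/*.conflist")
	all := append(confs, lists...)

	if len(all) == 0 {
		fmt.Println("no CNI config yet; NetworkReady will stay false")
		os.Exit(1)
	}
	for _, c := range all {
		fmt.Println("found CNI config:", c)
	}
}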
Nov 5 15:54:16.042558 containerd[1641]: time="2025-11-05T15:54:16.042462212Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:54:16.044546 containerd[1641]: time="2025-11-05T15:54:16.044508268Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=5941492" Nov 5 15:54:16.046094 containerd[1641]: time="2025-11-05T15:54:16.046054298Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:54:16.049087 containerd[1641]: time="2025-11-05T15:54:16.049027663Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:54:16.049898 containerd[1641]: time="2025-11-05T15:54:16.049838553Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.720583669s" Nov 5 15:54:16.049898 containerd[1641]: time="2025-11-05T15:54:16.049892935Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Nov 5 15:54:16.051053 containerd[1641]: time="2025-11-05T15:54:16.051021582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 5 15:54:16.055478 containerd[1641]: time="2025-11-05T15:54:16.055426421Z" level=info msg="CreateContainer within sandbox \"2fb3f202c4cf0ca13def5f9af0a521e7e5eaf7043f38862c53812572ce2b70dd\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 5 15:54:16.077404 containerd[1641]: time="2025-11-05T15:54:16.077327387Z" level=info msg="Container c9a84aac30a8a7dc01e766c24a2ce2d64674c094be93a82a141c2a05daeadbb3: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:54:16.081291 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1059157728.mount: Deactivated successfully. Nov 5 15:54:16.093892 containerd[1641]: time="2025-11-05T15:54:16.093829059Z" level=info msg="CreateContainer within sandbox \"2fb3f202c4cf0ca13def5f9af0a521e7e5eaf7043f38862c53812572ce2b70dd\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c9a84aac30a8a7dc01e766c24a2ce2d64674c094be93a82a141c2a05daeadbb3\"" Nov 5 15:54:16.094549 containerd[1641]: time="2025-11-05T15:54:16.094514024Z" level=info msg="StartContainer for \"c9a84aac30a8a7dc01e766c24a2ce2d64674c094be93a82a141c2a05daeadbb3\"" Nov 5 15:54:16.096265 containerd[1641]: time="2025-11-05T15:54:16.096235571Z" level=info msg="connecting to shim c9a84aac30a8a7dc01e766c24a2ce2d64674c094be93a82a141c2a05daeadbb3" address="unix:///run/containerd/s/445ce264b403ab9fc327d51ac4da22a2c7d8ae72daca153b015963eff8f2bb57" protocol=ttrpc version=3 Nov 5 15:54:16.127115 systemd[1]: Started cri-containerd-c9a84aac30a8a7dc01e766c24a2ce2d64674c094be93a82a141c2a05daeadbb3.scope - libcontainer container c9a84aac30a8a7dc01e766c24a2ce2d64674c094be93a82a141c2a05daeadbb3. 
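The pull above reports 5941492 bytes read in 1.720583669s; dividing the two gives the effective pull rate, a quick sanity figure when pulls look slow:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Figures copied from the "stop pulling image" and "Pulled image" entries
	// above for ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4.
	const bytesRead = 5941492

	dur, err := time.ParseDuration("1.720583669s")
	if err != nil {
		panic(err)
	}
	rate := float64(bytesRead) / dur.Seconds() / (1 << 20)
	fmt.Printf("effective pull rate: %.2f MiB/s\n", rate) // ~3.29 MiB/s
}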
Nov 5 15:54:16.195436 containerd[1641]: time="2025-11-05T15:54:16.195378012Z" level=info msg="StartContainer for \"c9a84aac30a8a7dc01e766c24a2ce2d64674c094be93a82a141c2a05daeadbb3\" returns successfully"
Nov 5 15:54:16.216825 systemd[1]: cri-containerd-c9a84aac30a8a7dc01e766c24a2ce2d64674c094be93a82a141c2a05daeadbb3.scope: Deactivated successfully.
Nov 5 15:54:16.220802 containerd[1641]: time="2025-11-05T15:54:16.220750417Z" level=info msg="received exit event container_id:\"c9a84aac30a8a7dc01e766c24a2ce2d64674c094be93a82a141c2a05daeadbb3\" id:\"c9a84aac30a8a7dc01e766c24a2ce2d64674c094be93a82a141c2a05daeadbb3\" pid:3385 exited_at:{seconds:1762358056 nanos:219306539}"
Nov 5 15:54:16.220958 containerd[1641]: time="2025-11-05T15:54:16.220888656Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c9a84aac30a8a7dc01e766c24a2ce2d64674c094be93a82a141c2a05daeadbb3\" id:\"c9a84aac30a8a7dc01e766c24a2ce2d64674c094be93a82a141c2a05daeadbb3\" pid:3385 exited_at:{seconds:1762358056 nanos:219306539}"
Nov 5 15:54:16.677213 kubelet[2813]: E1105 15:54:16.677170 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 15:54:16.938411 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c9a84aac30a8a7dc01e766c24a2ce2d64674c094be93a82a141c2a05daeadbb3-rootfs.mount: Deactivated successfully.
Nov 5 15:54:17.550813 kubelet[2813]: E1105 15:54:17.550723 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gf82q" podUID="5cbe1702-972a-4f84-9d2f-51b96b54edda"
Nov 5 15:54:19.550520 kubelet[2813]: E1105 15:54:19.550452 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gf82q" podUID="5cbe1702-972a-4f84-9d2f-51b96b54edda"
Nov 5 15:54:19.676944 containerd[1641]: time="2025-11-05T15:54:19.676849325Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:54:19.742139 containerd[1641]: time="2025-11-05T15:54:19.742044535Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33739890"
Nov 5 15:54:19.816119 containerd[1641]: time="2025-11-05T15:54:19.815967991Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:54:19.870001 containerd[1641]: time="2025-11-05T15:54:19.869884371Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:54:19.870677 containerd[1641]: time="2025-11-05T15:54:19.870651048Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.819591977s"
Nov 5 15:54:19.870718 containerd[1641]: time="2025-11-05T15:54:19.870677769Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Nov 5 15:54:19.871579 containerd[1641]: time="2025-11-05T15:54:19.871549614Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Nov 5 15:54:19.954903 containerd[1641]: time="2025-11-05T15:54:19.954843220Z" level=info msg="CreateContainer within sandbox \"53ea775c02f59ced5cf86ccff029c2bcfed83bbf3860135afbec631b773b0a55\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Nov 5 15:54:20.008726 containerd[1641]: time="2025-11-05T15:54:20.008650887Z" level=info msg="Container 46bd888aabac1b92141e8cd02c30057acb1028ae52448221fe8b097d820ab9b5: CDI devices from CRI Config.CDIDevices: []"
Nov 5 15:54:20.029120 containerd[1641]: time="2025-11-05T15:54:20.028954249Z" level=info msg="CreateContainer within sandbox \"53ea775c02f59ced5cf86ccff029c2bcfed83bbf3860135afbec631b773b0a55\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"46bd888aabac1b92141e8cd02c30057acb1028ae52448221fe8b097d820ab9b5\""
Nov 5 15:54:20.029703 containerd[1641]: time="2025-11-05T15:54:20.029676823Z" level=info msg="StartContainer for \"46bd888aabac1b92141e8cd02c30057acb1028ae52448221fe8b097d820ab9b5\""
Nov 5 15:54:20.031229 containerd[1641]: time="2025-11-05T15:54:20.031184510Z" level=info msg="connecting to shim 46bd888aabac1b92141e8cd02c30057acb1028ae52448221fe8b097d820ab9b5" address="unix:///run/containerd/s/bf4766203f149a79cc9a18c3a1a0ef9056e0ad7d0aecd4ca7bded05f902f0488" protocol=ttrpc version=3
Nov 5 15:54:20.063249 systemd[1]: Started cri-containerd-46bd888aabac1b92141e8cd02c30057acb1028ae52448221fe8b097d820ab9b5.scope - libcontainer container 46bd888aabac1b92141e8cd02c30057acb1028ae52448221fe8b097d820ab9b5.
Nov 5 15:54:20.149570 containerd[1641]: time="2025-11-05T15:54:20.149523673Z" level=info msg="StartContainer for \"46bd888aabac1b92141e8cd02c30057acb1028ae52448221fe8b097d820ab9b5\" returns successfully"
Nov 5 15:54:20.690462 kubelet[2813]: E1105 15:54:20.690394 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 15:54:20.728856 kubelet[2813]: I1105 15:54:20.728631 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5bdc44466b-w6bvh" podStartSLOduration=2.186850577 podStartE2EDuration="7.728604713s" podCreationTimestamp="2025-11-05 15:54:13 +0000 UTC" firstStartedPulling="2025-11-05 15:54:14.329668229 +0000 UTC m=+31.922543958" lastFinishedPulling="2025-11-05 15:54:19.871422365 +0000 UTC m=+37.464298094" observedRunningTime="2025-11-05 15:54:20.710490667 +0000 UTC m=+38.303366426" watchObservedRunningTime="2025-11-05 15:54:20.728604713 +0000 UTC m=+38.321480432"
Nov 5 15:54:21.550576 kubelet[2813]: E1105 15:54:21.550461 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gf82q" podUID="5cbe1702-972a-4f84-9d2f-51b96b54edda"
Nov 5 15:54:21.692397 kubelet[2813]: E1105 15:54:21.692343 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 15:54:22.693353 kubelet[2813]: E1105 15:54:22.693298 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 15:54:23.550335 kubelet[2813]: E1105 15:54:23.550243 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gf82q" podUID="5cbe1702-972a-4f84-9d2f-51b96b54edda"
Nov 5 15:54:25.374446 containerd[1641]: time="2025-11-05T15:54:25.374346649Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:54:25.377066 containerd[1641]: time="2025-11-05T15:54:25.377035267Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859"
Nov 5 15:54:25.396961 containerd[1641]: time="2025-11-05T15:54:25.396771771Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:54:25.402228 containerd[1641]: time="2025-11-05T15:54:25.400424032Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:54:25.402228 containerd[1641]: time="2025-11-05T15:54:25.401051475Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 5.529469611s"
Nov 5 15:54:25.402228 containerd[1641]: time="2025-11-05T15:54:25.401082525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\""
Nov 5 15:54:25.417271 containerd[1641]: time="2025-11-05T15:54:25.417198771Z" level=info msg="CreateContainer within sandbox \"2fb3f202c4cf0ca13def5f9af0a521e7e5eaf7043f38862c53812572ce2b70dd\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Nov 5 15:54:25.550542 kubelet[2813]: E1105 15:54:25.550457 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gf82q" podUID="5cbe1702-972a-4f84-9d2f-51b96b54edda"
Nov 5 15:54:25.578186 containerd[1641]: time="2025-11-05T15:54:25.578109808Z" level=info msg="Container 6187314ab29106ac2a537cf7622a8d78bc9119ae0f5563d84724970bc0df1c32: CDI devices from CRI Config.CDIDevices: []"
Nov 5 15:54:25.719645 containerd[1641]: time="2025-11-05T15:54:25.719491760Z" level=info msg="CreateContainer within sandbox \"2fb3f202c4cf0ca13def5f9af0a521e7e5eaf7043f38862c53812572ce2b70dd\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6187314ab29106ac2a537cf7622a8d78bc9119ae0f5563d84724970bc0df1c32\""
Nov 5 15:54:25.720056 containerd[1641]: time="2025-11-05T15:54:25.719970656Z" level=info msg="StartContainer for \"6187314ab29106ac2a537cf7622a8d78bc9119ae0f5563d84724970bc0df1c32\""
Nov 5 15:54:25.721939 containerd[1641]: time="2025-11-05T15:54:25.721889745Z" level=info msg="connecting to shim 6187314ab29106ac2a537cf7622a8d78bc9119ae0f5563d84724970bc0df1c32" address="unix:///run/containerd/s/445ce264b403ab9fc327d51ac4da22a2c7d8ae72daca153b015963eff8f2bb57" protocol=ttrpc version=3
Nov 5 15:54:25.748221 systemd[1]: Started cri-containerd-6187314ab29106ac2a537cf7622a8d78bc9119ae0f5563d84724970bc0df1c32.scope - libcontainer container 6187314ab29106ac2a537cf7622a8d78bc9119ae0f5563d84724970bc0df1c32.
Nov 5 15:54:25.980521 containerd[1641]: time="2025-11-05T15:54:25.980330067Z" level=info msg="StartContainer for \"6187314ab29106ac2a537cf7622a8d78bc9119ae0f5563d84724970bc0df1c32\" returns successfully"
Nov 5 15:54:26.705734 kubelet[2813]: E1105 15:54:26.705675 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 15:54:27.550310 kubelet[2813]: E1105 15:54:27.550236 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-gf82q" podUID="5cbe1702-972a-4f84-9d2f-51b96b54edda"
Nov 5 15:54:27.707393 kubelet[2813]: E1105 15:54:27.707341 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 15:54:28.810939 systemd[1]: cri-containerd-6187314ab29106ac2a537cf7622a8d78bc9119ae0f5563d84724970bc0df1c32.scope: Deactivated successfully.
Nov 5 15:54:28.811755 systemd[1]: cri-containerd-6187314ab29106ac2a537cf7622a8d78bc9119ae0f5563d84724970bc0df1c32.scope: Consumed 681ms CPU time, 181.7M memory peak, 1.1M read from disk, 171.3M written to disk.
Nov 5 15:54:28.812572 containerd[1641]: time="2025-11-05T15:54:28.812522813Z" level=info msg="received exit event container_id:\"6187314ab29106ac2a537cf7622a8d78bc9119ae0f5563d84724970bc0df1c32\" id:\"6187314ab29106ac2a537cf7622a8d78bc9119ae0f5563d84724970bc0df1c32\" pid:3495 exited_at:{seconds:1762358068 nanos:811697060}"
Nov 5 15:54:28.813004 containerd[1641]: time="2025-11-05T15:54:28.812605842Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6187314ab29106ac2a537cf7622a8d78bc9119ae0f5563d84724970bc0df1c32\" id:\"6187314ab29106ac2a537cf7622a8d78bc9119ae0f5563d84724970bc0df1c32\" pid:3495 exited_at:{seconds:1762358068 nanos:811697060}"
Nov 5 15:54:28.840376 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6187314ab29106ac2a537cf7622a8d78bc9119ae0f5563d84724970bc0df1c32-rootfs.mount: Deactivated successfully.
Nov 5 15:54:28.902084 kubelet[2813]: I1105 15:54:28.902027 2813 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Nov 5 15:54:29.556129 systemd[1]: Created slice kubepods-besteffort-pod5cbe1702_972a_4f84_9d2f_51b96b54edda.slice - libcontainer container kubepods-besteffort-pod5cbe1702_972a_4f84_9d2f_51b96b54edda.slice.
Nov 5 15:54:31.624691 containerd[1641]: time="2025-11-05T15:54:31.624636983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gf82q,Uid:5cbe1702-972a-4f84-9d2f-51b96b54edda,Namespace:calico-system,Attempt:0,}"
Nov 5 15:54:31.878019 kubelet[2813]: E1105 15:54:31.877866 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 15:54:31.879598 containerd[1641]: time="2025-11-05T15:54:31.879559675Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\""
Nov 5 15:54:32.100736 systemd[1]: Created slice kubepods-besteffort-podbc1f133a_26eb_43d7_9fdb_a3e47afd9653.slice - libcontainer container kubepods-besteffort-podbc1f133a_26eb_43d7_9fdb_a3e47afd9653.slice.
Nov 5 15:54:32.130846 containerd[1641]: time="2025-11-05T15:54:32.130676762Z" level=error msg="Failed to destroy network for sandbox \"d8cd0b39d8e614b1f846c8f33360e551d8b05b28c583c807594e98c0c66c93c3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 15:54:32.133206 systemd[1]: run-netns-cni\x2ddaedaca3\x2dacce\x2d4b4b\x2d954f\x2da7912bee57ee.mount: Deactivated successfully.
Nov 5 15:54:32.168311 systemd[1]: Created slice kubepods-besteffort-podb05fd954_e904_4df9_a183_93526853dbb1.slice - libcontainer container kubepods-besteffort-podb05fd954_e904_4df9_a183_93526853dbb1.slice.
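The kubepods-besteffort-pod5cbe1702_972a_4f84_9d2f_51b96b54edda.slice unit above is kubelet's systemd cgroup driver at work: each pod gets a slice named after its QoS class and UID, with the UID's dashes rewritten to underscores because "-" is systemd's slice hierarchy separator. A small sketch of the derivation:

```go
// Sketch of how the pod slice names in this log are formed under kubelet's
// systemd cgroup driver: "kubepods-<qos>-pod<uid>.slice", UID dashes -> "_".
package main

import (
	"fmt"
	"strings"
)

func podSlice(qos, uid string) string {
	return "kubepods-" + qos + "-pod" + strings.ReplaceAll(uid, "-", "_") + ".slice"
}

func main() {
	// Reproduces kubepods-besteffort-pod5cbe1702_972a_4f84_9d2f_51b96b54edda.slice,
	// the slice created for csi-node-driver-gf82q above.
	fmt.Println(podSlice("besteffort", "5cbe1702-972a-4f84-9d2f-51b96b54edda"))
}
```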
Nov 5 15:54:32.173571 kubelet[2813]: I1105 15:54:32.173524 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bc1f133a-26eb-43d7-9fdb-a3e47afd9653-calico-apiserver-certs\") pod \"calico-apiserver-d76b985b9-kbchr\" (UID: \"bc1f133a-26eb-43d7-9fdb-a3e47afd9653\") " pod="calico-apiserver/calico-apiserver-d76b985b9-kbchr"
Nov 5 15:54:32.173571 kubelet[2813]: I1105 15:54:32.173574 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncm9n\" (UniqueName: \"kubernetes.io/projected/bc1f133a-26eb-43d7-9fdb-a3e47afd9653-kube-api-access-ncm9n\") pod \"calico-apiserver-d76b985b9-kbchr\" (UID: \"bc1f133a-26eb-43d7-9fdb-a3e47afd9653\") " pod="calico-apiserver/calico-apiserver-d76b985b9-kbchr"
Nov 5 15:54:32.235476 containerd[1641]: time="2025-11-05T15:54:32.235348154Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gf82q,Uid:5cbe1702-972a-4f84-9d2f-51b96b54edda,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8cd0b39d8e614b1f846c8f33360e551d8b05b28c583c807594e98c0c66c93c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 15:54:32.235822 kubelet[2813]: E1105 15:54:32.235762 2813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8cd0b39d8e614b1f846c8f33360e551d8b05b28c583c807594e98c0c66c93c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 15:54:32.235944 kubelet[2813]: E1105 15:54:32.235843 2813 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8cd0b39d8e614b1f846c8f33360e551d8b05b28c583c807594e98c0c66c93c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gf82q"
Nov 5 15:54:32.235944 kubelet[2813]: E1105 15:54:32.235864 2813 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8cd0b39d8e614b1f846c8f33360e551d8b05b28c583c807594e98c0c66c93c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gf82q"
Nov 5 15:54:32.236040 kubelet[2813]: E1105 15:54:32.235991 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gf82q_calico-system(5cbe1702-972a-4f84-9d2f-51b96b54edda)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gf82q_calico-system(5cbe1702-972a-4f84-9d2f-51b96b54edda)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d8cd0b39d8e614b1f846c8f33360e551d8b05b28c583c807594e98c0c66c93c3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gf82q" podUID="5cbe1702-972a-4f84-9d2f-51b96b54edda"
Nov 5 15:54:32.244332 systemd[1]: Created slice kubepods-besteffort-pode369c643_3d7c_424a_939d_fd5462f1f671.slice - libcontainer container kubepods-besteffort-pode369c643_3d7c_424a_939d_fd5462f1f671.slice.
Nov 5 15:54:32.274963 kubelet[2813]: I1105 15:54:32.274827 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e369c643-3d7c-424a-939d-fd5462f1f671-calico-apiserver-certs\") pod \"calico-apiserver-d76b985b9-z9rht\" (UID: \"e369c643-3d7c-424a-939d-fd5462f1f671\") " pod="calico-apiserver/calico-apiserver-d76b985b9-z9rht"
Nov 5 15:54:32.274963 kubelet[2813]: I1105 15:54:32.274898 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b05fd954-e904-4df9-a183-93526853dbb1-tigera-ca-bundle\") pod \"calico-kube-controllers-757d4c4c4d-gc5kt\" (UID: \"b05fd954-e904-4df9-a183-93526853dbb1\") " pod="calico-system/calico-kube-controllers-757d4c4c4d-gc5kt"
Nov 5 15:54:32.274963 kubelet[2813]: I1105 15:54:32.274916 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsxpd\" (UniqueName: \"kubernetes.io/projected/b05fd954-e904-4df9-a183-93526853dbb1-kube-api-access-rsxpd\") pod \"calico-kube-controllers-757d4c4c4d-gc5kt\" (UID: \"b05fd954-e904-4df9-a183-93526853dbb1\") " pod="calico-system/calico-kube-controllers-757d4c4c4d-gc5kt"
Nov 5 15:54:32.275202 kubelet[2813]: I1105 15:54:32.274991 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcsxs\" (UniqueName: \"kubernetes.io/projected/e369c643-3d7c-424a-939d-fd5462f1f671-kube-api-access-kcsxs\") pod \"calico-apiserver-d76b985b9-z9rht\" (UID: \"e369c643-3d7c-424a-939d-fd5462f1f671\") " pod="calico-apiserver/calico-apiserver-d76b985b9-z9rht"
Nov 5 15:54:32.305900 systemd[1]: Created slice kubepods-burstable-pode9514e0b_1fb1_4b7f_898a_2d78ba283593.slice - libcontainer container kubepods-burstable-pode9514e0b_1fb1_4b7f_898a_2d78ba283593.slice.
Nov 5 15:54:32.330379 systemd[1]: Created slice kubepods-besteffort-pod48c2a4a5_482d_4600_8d80_4c89933cceaa.slice - libcontainer container kubepods-besteffort-pod48c2a4a5_482d_4600_8d80_4c89933cceaa.slice.
Nov 5 15:54:32.345208 systemd[1]: Created slice kubepods-burstable-podb599e586_f36d_4082_a717_ffeb6bad40b3.slice - libcontainer container kubepods-burstable-podb599e586_f36d_4082_a717_ffeb6bad40b3.slice.
Nov 5 15:54:32.353460 systemd[1]: Created slice kubepods-besteffort-pod668672a8_25ac_4baa_8f03_69b94b894d13.slice - libcontainer container kubepods-besteffort-pod668672a8_25ac_4baa_8f03_69b94b894d13.slice.
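Every sandbox failure in this stretch bottoms out in the same stat: the Calico CNI plugin will not wire a pod until the calico/node container has come up and written the host's name to /var/lib/calico/nodename on the shared host path. A minimal stand-in for that readiness check (not Calico's actual source):

```go
// Sketch of the readiness check implied by the repeated error above: the CNI
// plugin stats /var/lib/calico/nodename, a file calico/node writes once it
// is running with /var/lib/calico/ mounted from the host.
package main

import (
	"fmt"
	"os"
)

func main() {
	const nodenameFile = "/var/lib/calico/nodename"
	if _, err := os.Stat(nodenameFile); err != nil {
		// Mirrors the logged message: "stat /var/lib/calico/nodename: no such
		// file or directory: check that the calico/node container is running
		// and has mounted /var/lib/calico/".
		fmt.Printf("calico CNI not ready: %v\n", err)
		return
	}
	fmt.Println("calico/node has registered this host; CNI add/delete can proceed")
}
```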
Nov 5 15:54:32.376227 kubelet[2813]: I1105 15:54:32.375857 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/48c2a4a5-482d-4600-8d80-4c89933cceaa-goldmane-key-pair\") pod \"goldmane-7c778bb748-mswwg\" (UID: \"48c2a4a5-482d-4600-8d80-4c89933cceaa\") " pod="calico-system/goldmane-7c778bb748-mswwg"
Nov 5 15:54:32.376227 kubelet[2813]: I1105 15:54:32.375958 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48c2a4a5-482d-4600-8d80-4c89933cceaa-config\") pod \"goldmane-7c778bb748-mswwg\" (UID: \"48c2a4a5-482d-4600-8d80-4c89933cceaa\") " pod="calico-system/goldmane-7c778bb748-mswwg"
Nov 5 15:54:32.376227 kubelet[2813]: I1105 15:54:32.375983 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x76fc\" (UniqueName: \"kubernetes.io/projected/48c2a4a5-482d-4600-8d80-4c89933cceaa-kube-api-access-x76fc\") pod \"goldmane-7c778bb748-mswwg\" (UID: \"48c2a4a5-482d-4600-8d80-4c89933cceaa\") " pod="calico-system/goldmane-7c778bb748-mswwg"
Nov 5 15:54:32.376227 kubelet[2813]: I1105 15:54:32.376008 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2ztl\" (UniqueName: \"kubernetes.io/projected/e9514e0b-1fb1-4b7f-898a-2d78ba283593-kube-api-access-z2ztl\") pod \"coredns-66bc5c9577-ch7wn\" (UID: \"e9514e0b-1fb1-4b7f-898a-2d78ba283593\") " pod="kube-system/coredns-66bc5c9577-ch7wn"
Nov 5 15:54:32.376227 kubelet[2813]: I1105 15:54:32.376032 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b599e586-f36d-4082-a717-ffeb6bad40b3-config-volume\") pod \"coredns-66bc5c9577-sdjz8\" (UID: \"b599e586-f36d-4082-a717-ffeb6bad40b3\") " pod="kube-system/coredns-66bc5c9577-sdjz8"
Nov 5 15:54:32.376627 kubelet[2813]: I1105 15:54:32.376067 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/48c2a4a5-482d-4600-8d80-4c89933cceaa-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-mswwg\" (UID: \"48c2a4a5-482d-4600-8d80-4c89933cceaa\") " pod="calico-system/goldmane-7c778bb748-mswwg"
Nov 5 15:54:32.376627 kubelet[2813]: I1105 15:54:32.376088 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e9514e0b-1fb1-4b7f-898a-2d78ba283593-config-volume\") pod \"coredns-66bc5c9577-ch7wn\" (UID: \"e9514e0b-1fb1-4b7f-898a-2d78ba283593\") " pod="kube-system/coredns-66bc5c9577-ch7wn"
Nov 5 15:54:32.376627 kubelet[2813]: I1105 15:54:32.376106 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqw95\" (UniqueName: \"kubernetes.io/projected/b599e586-f36d-4082-a717-ffeb6bad40b3-kube-api-access-gqw95\") pod \"coredns-66bc5c9577-sdjz8\" (UID: \"b599e586-f36d-4082-a717-ffeb6bad40b3\") " pod="kube-system/coredns-66bc5c9577-sdjz8"
Nov 5 15:54:32.438692 containerd[1641]: time="2025-11-05T15:54:32.438537102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d76b985b9-kbchr,Uid:bc1f133a-26eb-43d7-9fdb-a3e47afd9653,Namespace:calico-apiserver,Attempt:0,}"
Nov 5 15:54:32.476825 kubelet[2813]: I1105 15:54:32.476751 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/668672a8-25ac-4baa-8f03-69b94b894d13-whisker-backend-key-pair\") pod \"whisker-7989cb6cd9-jpltg\" (UID: \"668672a8-25ac-4baa-8f03-69b94b894d13\") " pod="calico-system/whisker-7989cb6cd9-jpltg"
Nov 5 15:54:32.477392 kubelet[2813]: I1105 15:54:32.477360 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/668672a8-25ac-4baa-8f03-69b94b894d13-whisker-ca-bundle\") pod \"whisker-7989cb6cd9-jpltg\" (UID: \"668672a8-25ac-4baa-8f03-69b94b894d13\") " pod="calico-system/whisker-7989cb6cd9-jpltg"
Nov 5 15:54:32.478065 kubelet[2813]: I1105 15:54:32.477470 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9vjl\" (UniqueName: \"kubernetes.io/projected/668672a8-25ac-4baa-8f03-69b94b894d13-kube-api-access-z9vjl\") pod \"whisker-7989cb6cd9-jpltg\" (UID: \"668672a8-25ac-4baa-8f03-69b94b894d13\") " pod="calico-system/whisker-7989cb6cd9-jpltg"
Nov 5 15:54:32.601545 containerd[1641]: time="2025-11-05T15:54:32.601476468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-757d4c4c4d-gc5kt,Uid:b05fd954-e904-4df9-a183-93526853dbb1,Namespace:calico-system,Attempt:0,}"
Nov 5 15:54:32.669661 containerd[1641]: time="2025-11-05T15:54:32.669580048Z" level=error msg="Failed to destroy network for sandbox \"46f89784290eec8d629715a73987f338ee42597cd6e6ba5ec7828773a43bbfa2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 15:54:32.742942 containerd[1641]: time="2025-11-05T15:54:32.742750420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d76b985b9-z9rht,Uid:e369c643-3d7c-424a-939d-fd5462f1f671,Namespace:calico-apiserver,Attempt:0,}"
Nov 5 15:54:33.091183 containerd[1641]: time="2025-11-05T15:54:33.091069161Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d76b985b9-kbchr,Uid:bc1f133a-26eb-43d7-9fdb-a3e47afd9653,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"46f89784290eec8d629715a73987f338ee42597cd6e6ba5ec7828773a43bbfa2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 15:54:33.091659 kubelet[2813]: E1105 15:54:33.091551 2813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46f89784290eec8d629715a73987f338ee42597cd6e6ba5ec7828773a43bbfa2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 15:54:33.091659 kubelet[2813]: E1105 15:54:33.091645 2813 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46f89784290eec8d629715a73987f338ee42597cd6e6ba5ec7828773a43bbfa2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d76b985b9-kbchr"
Nov 5 15:54:33.092296 kubelet[2813]: E1105 15:54:33.091680 2813 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46f89784290eec8d629715a73987f338ee42597cd6e6ba5ec7828773a43bbfa2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d76b985b9-kbchr"
Nov 5 15:54:33.092296 kubelet[2813]: E1105 15:54:33.091755 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d76b985b9-kbchr_calico-apiserver(bc1f133a-26eb-43d7-9fdb-a3e47afd9653)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d76b985b9-kbchr_calico-apiserver(bc1f133a-26eb-43d7-9fdb-a3e47afd9653)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"46f89784290eec8d629715a73987f338ee42597cd6e6ba5ec7828773a43bbfa2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d76b985b9-kbchr" podUID="bc1f133a-26eb-43d7-9fdb-a3e47afd9653"
Nov 5 15:54:33.289817 containerd[1641]: time="2025-11-05T15:54:33.289743424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-mswwg,Uid:48c2a4a5-482d-4600-8d80-4c89933cceaa,Namespace:calico-system,Attempt:0,}"
Nov 5 15:54:33.342042 containerd[1641]: time="2025-11-05T15:54:33.341882927Z" level=error msg="Failed to destroy network for sandbox \"31a893b4d70b72a542be19fd099f0b5b56e4f55860a7b1078083fe96544f09c9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 15:54:33.356789 kubelet[2813]: E1105 15:54:33.356303 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 15:54:33.357352 containerd[1641]: time="2025-11-05T15:54:33.357292952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-sdjz8,Uid:b599e586-f36d-4082-a717-ffeb6bad40b3,Namespace:kube-system,Attempt:0,}"
Nov 5 15:54:33.420933 containerd[1641]: time="2025-11-05T15:54:33.420847542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7989cb6cd9-jpltg,Uid:668672a8-25ac-4baa-8f03-69b94b894d13,Namespace:calico-system,Attempt:0,}"
Nov 5 15:54:33.433167 containerd[1641]: time="2025-11-05T15:54:33.433089177Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-757d4c4c4d-gc5kt,Uid:b05fd954-e904-4df9-a183-93526853dbb1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"31a893b4d70b72a542be19fd099f0b5b56e4f55860a7b1078083fe96544f09c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 15:54:33.433531 kubelet[2813]: E1105 15:54:33.433474 2813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31a893b4d70b72a542be19fd099f0b5b56e4f55860a7b1078083fe96544f09c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 15:54:33.433690 kubelet[2813]: E1105 15:54:33.433564 2813 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31a893b4d70b72a542be19fd099f0b5b56e4f55860a7b1078083fe96544f09c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-757d4c4c4d-gc5kt"
Nov 5 15:54:33.433690 kubelet[2813]: E1105 15:54:33.433594 2813 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31a893b4d70b72a542be19fd099f0b5b56e4f55860a7b1078083fe96544f09c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-757d4c4c4d-gc5kt"
Nov 5 15:54:33.433794 containerd[1641]: time="2025-11-05T15:54:33.433761921Z" level=error msg="Failed to destroy network for sandbox \"64629c097c0e29becffa1b5e255f6de089d8f3214aa12c5707d22d41648ea3ce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 15:54:33.434010 kubelet[2813]: E1105 15:54:33.433949 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-757d4c4c4d-gc5kt_calico-system(b05fd954-e904-4df9-a183-93526853dbb1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-757d4c4c4d-gc5kt_calico-system(b05fd954-e904-4df9-a183-93526853dbb1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"31a893b4d70b72a542be19fd099f0b5b56e4f55860a7b1078083fe96544f09c9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-757d4c4c4d-gc5kt" podUID="b05fd954-e904-4df9-a183-93526853dbb1"
Nov 5 15:54:33.530970 kubelet[2813]: E1105 15:54:33.530304 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 15:54:33.536693 containerd[1641]: time="2025-11-05T15:54:33.536378580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ch7wn,Uid:e9514e0b-1fb1-4b7f-898a-2d78ba283593,Namespace:kube-system,Attempt:0,}"
Nov 5 15:54:33.545622 containerd[1641]: time="2025-11-05T15:54:33.545370651Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d76b985b9-z9rht,Uid:e369c643-3d7c-424a-939d-fd5462f1f671,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"64629c097c0e29becffa1b5e255f6de089d8f3214aa12c5707d22d41648ea3ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 15:54:33.545980 kubelet[2813]: E1105 15:54:33.545883 2813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64629c097c0e29becffa1b5e255f6de089d8f3214aa12c5707d22d41648ea3ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 15:54:33.546093 kubelet[2813]: E1105 15:54:33.546003 2813 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64629c097c0e29becffa1b5e255f6de089d8f3214aa12c5707d22d41648ea3ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d76b985b9-z9rht"
Nov 5 15:54:33.546093 kubelet[2813]: E1105 15:54:33.546048 2813 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64629c097c0e29becffa1b5e255f6de089d8f3214aa12c5707d22d41648ea3ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d76b985b9-z9rht"
Nov 5 15:54:33.546184 kubelet[2813]: E1105 15:54:33.546151 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d76b985b9-z9rht_calico-apiserver(e369c643-3d7c-424a-939d-fd5462f1f671)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d76b985b9-z9rht_calico-apiserver(e369c643-3d7c-424a-939d-fd5462f1f671)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"64629c097c0e29becffa1b5e255f6de089d8f3214aa12c5707d22d41648ea3ce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d76b985b9-z9rht" podUID="e369c643-3d7c-424a-939d-fd5462f1f671"
Nov 5 15:54:33.629190 containerd[1641]: time="2025-11-05T15:54:33.628906290Z" level=error msg="Failed to destroy network for sandbox \"186fff7170106c23d8970b29d5f820cfc258a6151dd093c8c52d27833a0aeb1f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 15:54:33.651112 containerd[1641]: time="2025-11-05T15:54:33.651037412Z" level=error msg="Failed to destroy network for sandbox \"33677e4a57a085734008f2d75eaed513cf5a1bf1982fcbe34eb73d22b75bef56\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 15:54:33.670036 containerd[1641]: time="2025-11-05T15:54:33.668732239Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-mswwg,Uid:48c2a4a5-482d-4600-8d80-4c89933cceaa,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"186fff7170106c23d8970b29d5f820cfc258a6151dd093c8c52d27833a0aeb1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 15:54:33.670984 kubelet[2813]: E1105 15:54:33.670876 2813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"186fff7170106c23d8970b29d5f820cfc258a6151dd093c8c52d27833a0aeb1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 15:54:33.671166 kubelet[2813]: E1105 15:54:33.671003 2813 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"186fff7170106c23d8970b29d5f820cfc258a6151dd093c8c52d27833a0aeb1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-mswwg"
Nov 5 15:54:33.671166 kubelet[2813]: E1105 15:54:33.671036 2813 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"186fff7170106c23d8970b29d5f820cfc258a6151dd093c8c52d27833a0aeb1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-mswwg"
Nov 5 15:54:33.671166 kubelet[2813]: E1105 15:54:33.671113 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-mswwg_calico-system(48c2a4a5-482d-4600-8d80-4c89933cceaa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-mswwg_calico-system(48c2a4a5-482d-4600-8d80-4c89933cceaa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"186fff7170106c23d8970b29d5f820cfc258a6151dd093c8c52d27833a0aeb1f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-mswwg" podUID="48c2a4a5-482d-4600-8d80-4c89933cceaa"
Nov 5 15:54:33.671313 containerd[1641]: time="2025-11-05T15:54:33.671003112Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-sdjz8,Uid:b599e586-f36d-4082-a717-ffeb6bad40b3,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"33677e4a57a085734008f2d75eaed513cf5a1bf1982fcbe34eb73d22b75bef56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 15:54:33.671760 kubelet[2813]: E1105 15:54:33.671705 2813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33677e4a57a085734008f2d75eaed513cf5a1bf1982fcbe34eb73d22b75bef56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 15:54:33.671848 kubelet[2813]: E1105 15:54:33.671821 2813 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33677e4a57a085734008f2d75eaed513cf5a1bf1982fcbe34eb73d22b75bef56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-sdjz8"
Nov 5 15:54:33.673303 kubelet[2813]: E1105 15:54:33.671864 2813 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33677e4a57a085734008f2d75eaed513cf5a1bf1982fcbe34eb73d22b75bef56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-sdjz8"
Nov 5 15:54:33.673303 kubelet[2813]: E1105 15:54:33.671990 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-sdjz8_kube-system(b599e586-f36d-4082-a717-ffeb6bad40b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-sdjz8_kube-system(b599e586-f36d-4082-a717-ffeb6bad40b3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"33677e4a57a085734008f2d75eaed513cf5a1bf1982fcbe34eb73d22b75bef56\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-sdjz8" podUID="b599e586-f36d-4082-a717-ffeb6bad40b3"
Nov 5 15:54:33.695531 containerd[1641]: time="2025-11-05T15:54:33.695430918Z" level=error msg="Failed to destroy network for sandbox \"c006d9e1fdf18feda8bbcbafef4ba78ec4bc1181af85cace15416c55a50dabfb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 15:54:33.703044 containerd[1641]: time="2025-11-05T15:54:33.702981559Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7989cb6cd9-jpltg,Uid:668672a8-25ac-4baa-8f03-69b94b894d13,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c006d9e1fdf18feda8bbcbafef4ba78ec4bc1181af85cace15416c55a50dabfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 15:54:33.703326 kubelet[2813]: E1105 15:54:33.703260 2813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c006d9e1fdf18feda8bbcbafef4ba78ec4bc1181af85cace15416c55a50dabfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 15:54:33.703998 kubelet[2813]: E1105 15:54:33.703967 2813 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c006d9e1fdf18feda8bbcbafef4ba78ec4bc1181af85cace15416c55a50dabfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7989cb6cd9-jpltg"
Nov 5 15:54:33.704081 kubelet[2813]: E1105 15:54:33.704004 2813 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c006d9e1fdf18feda8bbcbafef4ba78ec4bc1181af85cace15416c55a50dabfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7989cb6cd9-jpltg"
Nov 5 15:54:33.704120 kubelet[2813]: E1105 15:54:33.704079 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7989cb6cd9-jpltg_calico-system(668672a8-25ac-4baa-8f03-69b94b894d13)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7989cb6cd9-jpltg_calico-system(668672a8-25ac-4baa-8f03-69b94b894d13)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c006d9e1fdf18feda8bbcbafef4ba78ec4bc1181af85cace15416c55a50dabfb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7989cb6cd9-jpltg" podUID="668672a8-25ac-4baa-8f03-69b94b894d13"
Nov 5 15:54:33.736336 containerd[1641]: time="2025-11-05T15:54:33.736257859Z" level=error msg="Failed to destroy network for sandbox \"bc111e7403c1d4edfbca197be15c935e07ea1ddaa5bfb0c096b31e5989f30ca2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 15:54:33.747644 containerd[1641]: time="2025-11-05T15:54:33.747556123Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ch7wn,Uid:e9514e0b-1fb1-4b7f-898a-2d78ba283593,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc111e7403c1d4edfbca197be15c935e07ea1ddaa5bfb0c096b31e5989f30ca2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 15:54:33.747956 kubelet[2813]: E1105 15:54:33.747871 2813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc111e7403c1d4edfbca197be15c935e07ea1ddaa5bfb0c096b31e5989f30ca2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 15:54:33.748051 kubelet[2813]: E1105 15:54:33.747963 2813 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc111e7403c1d4edfbca197be15c935e07ea1ddaa5bfb0c096b31e5989f30ca2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-ch7wn"
Nov 5 15:54:33.748051 kubelet[2813]: E1105 15:54:33.747988 2813 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc111e7403c1d4edfbca197be15c935e07ea1ddaa5bfb0c096b31e5989f30ca2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-ch7wn"
Nov 5 15:54:33.748117 kubelet[2813]: E1105 15:54:33.748070 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-ch7wn_kube-system(e9514e0b-1fb1-4b7f-898a-2d78ba283593)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-ch7wn_kube-system(e9514e0b-1fb1-4b7f-898a-2d78ba283593)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bc111e7403c1d4edfbca197be15c935e07ea1ddaa5bfb0c096b31e5989f30ca2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-ch7wn" podUID="e9514e0b-1fb1-4b7f-898a-2d78ba283593"
Nov 5 15:54:33.877775 systemd[1]: run-netns-cni\x2d43b875b1\x2dbab9\x2d3042\x2dac41\x2db4189eeadff5.mount: Deactivated successfully.
Nov 5 15:54:33.877908 systemd[1]: run-netns-cni\x2d3ca8aed8\x2d5579\x2d3de9\x2d3d30\x2daa52f7397ac9.mount: Deactivated successfully.
Nov 5 15:54:47.988592 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2707427563.mount: Deactivated successfully.
Nov 5 15:54:49.934722 kubelet[2813]: E1105 15:54:49.934628 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 15:54:49.935457 containerd[1641]: time="2025-11-05T15:54:49.935202409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ch7wn,Uid:e9514e0b-1fb1-4b7f-898a-2d78ba283593,Namespace:kube-system,Attempt:0,}"
Nov 5 15:54:50.395574 kubelet[2813]: E1105 15:54:50.395527 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 15:54:50.396134 containerd[1641]: time="2025-11-05T15:54:50.396088648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-sdjz8,Uid:b599e586-f36d-4082-a717-ffeb6bad40b3,Namespace:kube-system,Attempt:0,}"
Nov 5 15:54:50.633144 containerd[1641]: time="2025-11-05T15:54:50.633066625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7989cb6cd9-jpltg,Uid:668672a8-25ac-4baa-8f03-69b94b894d13,Namespace:calico-system,Attempt:0,}"
Nov 5 15:54:50.861996 containerd[1641]: time="2025-11-05T15:54:50.860479276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d76b985b9-z9rht,Uid:e369c643-3d7c-424a-939d-fd5462f1f671,Namespace:calico-apiserver,Attempt:0,}"
Nov 5 15:54:51.219949 containerd[1641]: time="2025-11-05T15:54:51.219800622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gf82q,Uid:5cbe1702-972a-4f84-9d2f-51b96b54edda,Namespace:calico-system,Attempt:0,}"
Nov 5 15:54:51.380542 containerd[1641]: time="2025-11-05T15:54:51.380466030Z" level=error msg="Failed to destroy network for sandbox \"6314f47ed56646d7eee9ef0259ae38fba82ee2f68037bebf4dd8d5866fa793aa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 15:54:51.382733 systemd[1]: run-netns-cni\x2d5ff90c3a\x2d9d72\x2d1ea8\x2de8b7\x2d500032bc536b.mount: Deactivated successfully.
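The run-netns-cni\x2d... mount units above show systemd's unit-name escaping: when a path such as /run/netns/cni-5ff90c3a-... becomes a .mount unit, "/" turns into "-" and bytes that would be ambiguous, including the literal dashes inside the netns name, are written as \xXX. A sketch of that escaping (leading-dot handling and other corner cases omitted):

```go
// Sketch of systemd's path-to-unit-name escaping as seen in the run-netns
// mounts above: path separators become "-", and bytes outside a small safe
// set (notably the literal "-") are escaped as \xXX.
package main

import (
	"fmt"
	"strings"
)

func escapeComponent(s string) string {
	var b strings.Builder
	for i := 0; i < len(s); i++ {
		c := s[i]
		switch {
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == ':', c == '_', c == '.':
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c) // e.g. "-" -> \x2d
		}
	}
	return b.String()
}

func escapePath(p string) string {
	parts := strings.Split(strings.Trim(p, "/"), "/")
	for i, part := range parts {
		parts[i] = escapeComponent(part)
	}
	return strings.Join(parts, "-")
}

func main() {
	// Yields run-netns-cni\x2d5ff90c3a\x2d9d72\x2d1ea8\x2de8b7\x2d500032bc536b,
	// to which systemd appends the ".mount" unit suffix.
	fmt.Println(escapePath("/run/netns/cni-5ff90c3a-9d72-1ea8-e8b7-500032bc536b"))
}
```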
Nov 5 15:54:51.577496 containerd[1641]: time="2025-11-05T15:54:51.577378166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-mswwg,Uid:48c2a4a5-482d-4600-8d80-4c89933cceaa,Namespace:calico-system,Attempt:0,}"
Nov 5 15:54:51.827085 containerd[1641]: time="2025-11-05T15:54:51.827028718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-757d4c4c4d-gc5kt,Uid:b05fd954-e904-4df9-a183-93526853dbb1,Namespace:calico-system,Attempt:0,}"
Nov 5 15:54:52.236655 containerd[1641]: time="2025-11-05T15:54:52.236585410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d76b985b9-kbchr,Uid:bc1f133a-26eb-43d7-9fdb-a3e47afd9653,Namespace:calico-apiserver,Attempt:0,}"
Nov 5 15:54:52.352334 containerd[1641]: time="2025-11-05T15:54:52.352254045Z" level=error msg="Failed to destroy network for sandbox \"00e22c87db4c2e9f850350020b7555d89f9169b59759707fd20da86ce21e1a91\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 15:54:52.354673 systemd[1]: run-netns-cni\x2d7c124038\x2dd6bf\x2d4853\x2d93f5\x2d3f8210f59ce3.mount: Deactivated successfully.
Nov 5 15:54:52.358256 containerd[1641]: time="2025-11-05T15:54:52.358178846Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ch7wn,Uid:e9514e0b-1fb1-4b7f-898a-2d78ba283593,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6314f47ed56646d7eee9ef0259ae38fba82ee2f68037bebf4dd8d5866fa793aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 15:54:52.358562 kubelet[2813]: E1105 15:54:52.358505 2813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6314f47ed56646d7eee9ef0259ae38fba82ee2f68037bebf4dd8d5866fa793aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 15:54:52.358992 kubelet[2813]: E1105 15:54:52.358589 2813 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6314f47ed56646d7eee9ef0259ae38fba82ee2f68037bebf4dd8d5866fa793aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-ch7wn"
Nov 5 15:54:52.358992 kubelet[2813]: E1105 15:54:52.358632 2813 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6314f47ed56646d7eee9ef0259ae38fba82ee2f68037bebf4dd8d5866fa793aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-ch7wn"
Nov 5 15:54:52.358992 kubelet[2813]: E1105 15:54:52.358707 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-ch7wn_kube-system(e9514e0b-1fb1-4b7f-898a-2d78ba283593)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-ch7wn_kube-system(e9514e0b-1fb1-4b7f-898a-2d78ba283593)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6314f47ed56646d7eee9ef0259ae38fba82ee2f68037bebf4dd8d5866fa793aa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-ch7wn" podUID="e9514e0b-1fb1-4b7f-898a-2d78ba283593"
Nov 5 15:54:52.785440 containerd[1641]: time="2025-11-05T15:54:52.785362237Z" level=error msg="Failed to destroy network for sandbox \"f8ed097b1132b73657f2518cf0eb6361790c5f0131064426524f2fbebbcdf60f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 15:54:52.788202 systemd[1]: run-netns-cni\x2dbb92cf9c\x2d2aa6\x2d7755\x2d6442\x2d74e9cbfa6462.mount: Deactivated successfully.
Nov 5 15:54:52.951474 containerd[1641]: time="2025-11-05T15:54:52.951394241Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 5 15:54:53.195024 containerd[1641]: time="2025-11-05T15:54:53.194962538Z" level=error msg="Failed to destroy network for sandbox \"880e049440748995681a976135ca979e751b2b2557a8d85b2f0fce016d42e78d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 15:54:53.197565 systemd[1]: run-netns-cni\x2d65fb8baf\x2d2000\x2d9815\x2df511\x2dac5777b58cc6.mount: Deactivated successfully.
Nov 5 15:54:53.353219 containerd[1641]: time="2025-11-05T15:54:53.353145672Z" level=error msg="Failed to destroy network for sandbox \"359cb39d8b200ea0dd882729b6a17e31fd3e86055d442f100e972d75a72e4fe7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 15:54:53.495787 containerd[1641]: time="2025-11-05T15:54:53.495574921Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-sdjz8,Uid:b599e586-f36d-4082-a717-ffeb6bad40b3,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"00e22c87db4c2e9f850350020b7555d89f9169b59759707fd20da86ce21e1a91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 15:54:53.496114 kubelet[2813]: E1105 15:54:53.496002 2813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00e22c87db4c2e9f850350020b7555d89f9169b59759707fd20da86ce21e1a91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 5 15:54:53.496602 kubelet[2813]: E1105 15:54:53.496134 2813 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00e22c87db4c2e9f850350020b7555d89f9169b59759707fd20da86ce21e1a91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check
that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-sdjz8" Nov 5 15:54:53.496602 kubelet[2813]: E1105 15:54:53.496161 2813 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"00e22c87db4c2e9f850350020b7555d89f9169b59759707fd20da86ce21e1a91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-sdjz8" Nov 5 15:54:53.496602 kubelet[2813]: E1105 15:54:53.496249 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-sdjz8_kube-system(b599e586-f36d-4082-a717-ffeb6bad40b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-sdjz8_kube-system(b599e586-f36d-4082-a717-ffeb6bad40b3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"00e22c87db4c2e9f850350020b7555d89f9169b59759707fd20da86ce21e1a91\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-sdjz8" podUID="b599e586-f36d-4082-a717-ffeb6bad40b3" Nov 5 15:54:53.737497 containerd[1641]: time="2025-11-05T15:54:53.737437189Z" level=error msg="Failed to destroy network for sandbox \"587a94d5239fb5b302cdf18c5348ccf97ced900c407f6a237974d7c1ca16d50d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:54:53.768441 systemd[1]: run-netns-cni\x2d367897a2\x2d3ce2\x2d1d39\x2dda5d\x2dedd8a1ecb3e5.mount: Deactivated successfully. Nov 5 15:54:53.768569 systemd[1]: run-netns-cni\x2d88515525\x2dfc83\x2d7037\x2d78c9\x2dd57fb9aee8e1.mount: Deactivated successfully. 
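The run-netns-cni\x2d… mount units systemd keeps deactivating are escaped forms of network-namespace paths such as /run/netns/cni-43b875b1-…: systemd's unit-name encoding maps "/" to "-" and hex-escapes any byte outside [a-zA-Z0-9:_.]. A simplified re-implementation of that escaping (edge cases such as leading dots are ignored), enough to reproduce the unit names above:

package main

import (
	"fmt"
	"strings"
)

// escapePath approximates `systemd-escape --path`: strip surrounding
// slashes, map "/" to "-", and hex-escape everything outside
// [a-zA-Z0-9:_.]. Leading-dot and empty-path edge cases are skipped.
func escapePath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == ':', c == '_', c == '.':
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c)
		}
	}
	return b.String()
}

func main() {
	// Prints run-netns-cni\x2d43b875b1\x2dbab9\x2d3042\x2dac41\x2db4189eeadff5.mount,
	// matching the unit name deactivated in the log.
	fmt.Println(escapePath("/run/netns/cni-43b875b1-bab9-3042-ac41-b4189eeadff5") + ".mount")
}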
Nov 5 15:54:53.827951 containerd[1641]: time="2025-11-05T15:54:53.827842404Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7989cb6cd9-jpltg,Uid:668672a8-25ac-4baa-8f03-69b94b894d13,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8ed097b1132b73657f2518cf0eb6361790c5f0131064426524f2fbebbcdf60f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:54:53.828229 kubelet[2813]: E1105 15:54:53.828170 2813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8ed097b1132b73657f2518cf0eb6361790c5f0131064426524f2fbebbcdf60f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:54:53.828300 kubelet[2813]: E1105 15:54:53.828244 2813 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8ed097b1132b73657f2518cf0eb6361790c5f0131064426524f2fbebbcdf60f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7989cb6cd9-jpltg" Nov 5 15:54:53.828300 kubelet[2813]: E1105 15:54:53.828273 2813 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8ed097b1132b73657f2518cf0eb6361790c5f0131064426524f2fbebbcdf60f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7989cb6cd9-jpltg" Nov 5 15:54:53.828389 kubelet[2813]: E1105 15:54:53.828346 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7989cb6cd9-jpltg_calico-system(668672a8-25ac-4baa-8f03-69b94b894d13)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7989cb6cd9-jpltg_calico-system(668672a8-25ac-4baa-8f03-69b94b894d13)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f8ed097b1132b73657f2518cf0eb6361790c5f0131064426524f2fbebbcdf60f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7989cb6cd9-jpltg" podUID="668672a8-25ac-4baa-8f03-69b94b894d13" Nov 5 15:54:54.038515 containerd[1641]: time="2025-11-05T15:54:54.038351955Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Nov 5 15:54:54.717874 containerd[1641]: time="2025-11-05T15:54:54.716579365Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d76b985b9-z9rht,Uid:e369c643-3d7c-424a-939d-fd5462f1f671,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"880e049440748995681a976135ca979e751b2b2557a8d85b2f0fce016d42e78d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Nov 5 15:54:54.718651 kubelet[2813]: E1105 15:54:54.716878 2813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"880e049440748995681a976135ca979e751b2b2557a8d85b2f0fce016d42e78d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:54:54.718651 kubelet[2813]: E1105 15:54:54.717113 2813 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"880e049440748995681a976135ca979e751b2b2557a8d85b2f0fce016d42e78d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d76b985b9-z9rht" Nov 5 15:54:54.718651 kubelet[2813]: E1105 15:54:54.717208 2813 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"880e049440748995681a976135ca979e751b2b2557a8d85b2f0fce016d42e78d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d76b985b9-z9rht" Nov 5 15:54:54.719255 kubelet[2813]: E1105 15:54:54.717375 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d76b985b9-z9rht_calico-apiserver(e369c643-3d7c-424a-939d-fd5462f1f671)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d76b985b9-z9rht_calico-apiserver(e369c643-3d7c-424a-939d-fd5462f1f671)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"880e049440748995681a976135ca979e751b2b2557a8d85b2f0fce016d42e78d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d76b985b9-z9rht" podUID="e369c643-3d7c-424a-939d-fd5462f1f671" Nov 5 15:54:54.724522 containerd[1641]: time="2025-11-05T15:54:54.724453507Z" level=error msg="Failed to destroy network for sandbox \"8d6a086ebd483a38777e33238c014a5dac43d93aa6770cffe0b86c167b7d902d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:54:54.726657 systemd[1]: run-netns-cni\x2d9a88aba0\x2d9813\x2db2d3\x2da031\x2df7b6e2b20dc6.mount: Deactivated successfully. Nov 5 15:54:54.751247 containerd[1641]: time="2025-11-05T15:54:54.751174968Z" level=error msg="Failed to destroy network for sandbox \"b0322225500733e68c34df392919be39be5fe6bdfc861b7667084b7fb6f76521\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:54:54.753482 systemd[1]: run-netns-cni\x2dfe6c51a3\x2ddaf5\x2da028\x2d8007\x2d332a3dbb4278.mount: Deactivated successfully. 
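The recurring dns.go:154 errors mean the host resolv.conf lists more nameservers than the three glibc supports (MAXNS), so kubelet drops the extras and applies only "1.1.1.1 1.0.0.1 8.8.8.8". A sketch of that truncation, simplified from kubelet's actual resolv.conf handling:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// glibc resolvers honor at most three nameserver entries (MAXNS), which is
// also the limit kubelet enforces when building a pod's resolv.conf.
const maxNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		// kubelet logs the same complaint and carries on with the first three.
		fmt.Printf("Nameserver limits exceeded, the applied nameserver line is: %s\n",
			strings.Join(servers[:maxNameservers], " "))
		return
	}
	fmt.Printf("nameservers within limits: %s\n", strings.Join(servers, " "))
}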
Nov 5 15:54:54.791996 containerd[1641]: time="2025-11-05T15:54:54.791901655Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gf82q,Uid:5cbe1702-972a-4f84-9d2f-51b96b54edda,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"359cb39d8b200ea0dd882729b6a17e31fd3e86055d442f100e972d75a72e4fe7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:54:54.792323 kubelet[2813]: E1105 15:54:54.792233 2813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"359cb39d8b200ea0dd882729b6a17e31fd3e86055d442f100e972d75a72e4fe7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:54:54.792380 kubelet[2813]: E1105 15:54:54.792325 2813 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"359cb39d8b200ea0dd882729b6a17e31fd3e86055d442f100e972d75a72e4fe7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gf82q" Nov 5 15:54:54.792380 kubelet[2813]: E1105 15:54:54.792352 2813 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"359cb39d8b200ea0dd882729b6a17e31fd3e86055d442f100e972d75a72e4fe7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-gf82q" Nov 5 15:54:54.792462 kubelet[2813]: E1105 15:54:54.792424 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-gf82q_calico-system(5cbe1702-972a-4f84-9d2f-51b96b54edda)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-gf82q_calico-system(5cbe1702-972a-4f84-9d2f-51b96b54edda)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"359cb39d8b200ea0dd882729b6a17e31fd3e86055d442f100e972d75a72e4fe7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-gf82q" podUID="5cbe1702-972a-4f84-9d2f-51b96b54edda" Nov 5 15:54:54.816682 containerd[1641]: time="2025-11-05T15:54:54.816605977Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-mswwg,Uid:48c2a4a5-482d-4600-8d80-4c89933cceaa,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"587a94d5239fb5b302cdf18c5348ccf97ced900c407f6a237974d7c1ca16d50d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:54:54.816855 kubelet[2813]: E1105 15:54:54.816797 2813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"587a94d5239fb5b302cdf18c5348ccf97ced900c407f6a237974d7c1ca16d50d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:54:54.816894 kubelet[2813]: E1105 15:54:54.816861 2813 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"587a94d5239fb5b302cdf18c5348ccf97ced900c407f6a237974d7c1ca16d50d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-mswwg" Nov 5 15:54:54.816894 kubelet[2813]: E1105 15:54:54.816877 2813 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"587a94d5239fb5b302cdf18c5348ccf97ced900c407f6a237974d7c1ca16d50d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-mswwg" Nov 5 15:54:54.816973 kubelet[2813]: E1105 15:54:54.816948 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-mswwg_calico-system(48c2a4a5-482d-4600-8d80-4c89933cceaa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-mswwg_calico-system(48c2a4a5-482d-4600-8d80-4c89933cceaa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"587a94d5239fb5b302cdf18c5348ccf97ced900c407f6a237974d7c1ca16d50d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-mswwg" podUID="48c2a4a5-482d-4600-8d80-4c89933cceaa" Nov 5 15:54:54.848773 containerd[1641]: time="2025-11-05T15:54:54.848700981Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:54:55.021859 containerd[1641]: time="2025-11-05T15:54:55.021679338Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-757d4c4c4d-gc5kt,Uid:b05fd954-e904-4df9-a183-93526853dbb1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d6a086ebd483a38777e33238c014a5dac43d93aa6770cffe0b86c167b7d902d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:54:55.022548 kubelet[2813]: E1105 15:54:55.022508 2813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d6a086ebd483a38777e33238c014a5dac43d93aa6770cffe0b86c167b7d902d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:54:55.022621 kubelet[2813]: E1105 15:54:55.022572 2813 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"8d6a086ebd483a38777e33238c014a5dac43d93aa6770cffe0b86c167b7d902d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-757d4c4c4d-gc5kt" Nov 5 15:54:55.022621 kubelet[2813]: E1105 15:54:55.022591 2813 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d6a086ebd483a38777e33238c014a5dac43d93aa6770cffe0b86c167b7d902d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-757d4c4c4d-gc5kt" Nov 5 15:54:55.022686 kubelet[2813]: E1105 15:54:55.022666 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-757d4c4c4d-gc5kt_calico-system(b05fd954-e904-4df9-a183-93526853dbb1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-757d4c4c4d-gc5kt_calico-system(b05fd954-e904-4df9-a183-93526853dbb1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8d6a086ebd483a38777e33238c014a5dac43d93aa6770cffe0b86c167b7d902d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-757d4c4c4d-gc5kt" podUID="b05fd954-e904-4df9-a183-93526853dbb1" Nov 5 15:54:55.120490 containerd[1641]: time="2025-11-05T15:54:55.120392347Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d76b985b9-kbchr,Uid:bc1f133a-26eb-43d7-9fdb-a3e47afd9653,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0322225500733e68c34df392919be39be5fe6bdfc861b7667084b7fb6f76521\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:54:55.120932 kubelet[2813]: E1105 15:54:55.120840 2813 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0322225500733e68c34df392919be39be5fe6bdfc861b7667084b7fb6f76521\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:54:55.120932 kubelet[2813]: E1105 15:54:55.120915 2813 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0322225500733e68c34df392919be39be5fe6bdfc861b7667084b7fb6f76521\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d76b985b9-kbchr" Nov 5 15:54:55.121148 kubelet[2813]: E1105 15:54:55.120967 2813 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0322225500733e68c34df392919be39be5fe6bdfc861b7667084b7fb6f76521\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d76b985b9-kbchr" Nov 5 15:54:55.121148 kubelet[2813]: E1105 15:54:55.121047 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d76b985b9-kbchr_calico-apiserver(bc1f133a-26eb-43d7-9fdb-a3e47afd9653)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d76b985b9-kbchr_calico-apiserver(bc1f133a-26eb-43d7-9fdb-a3e47afd9653)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b0322225500733e68c34df392919be39be5fe6bdfc861b7667084b7fb6f76521\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d76b985b9-kbchr" podUID="bc1f133a-26eb-43d7-9fdb-a3e47afd9653" Nov 5 15:54:55.137667 containerd[1641]: time="2025-11-05T15:54:55.137605788Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:54:55.138452 containerd[1641]: time="2025-11-05T15:54:55.138301010Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 23.258699866s" Nov 5 15:54:55.138679 containerd[1641]: time="2025-11-05T15:54:55.138509376Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Nov 5 15:54:55.187694 containerd[1641]: time="2025-11-05T15:54:55.187627818Z" level=info msg="CreateContainer within sandbox \"2fb3f202c4cf0ca13def5f9af0a521e7e5eaf7043f38862c53812572ce2b70dd\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 5 15:54:55.303339 containerd[1641]: time="2025-11-05T15:54:55.302836632Z" level=info msg="Container cc76f6a2ed069f539fd74d0187edac2ce3bc4ef77bfb7cc9ebe54463270af23d: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:54:55.483787 containerd[1641]: time="2025-11-05T15:54:55.483729746Z" level=info msg="CreateContainer within sandbox \"2fb3f202c4cf0ca13def5f9af0a521e7e5eaf7043f38862c53812572ce2b70dd\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"cc76f6a2ed069f539fd74d0187edac2ce3bc4ef77bfb7cc9ebe54463270af23d\"" Nov 5 15:54:55.484493 containerd[1641]: time="2025-11-05T15:54:55.484455797Z" level=info msg="StartContainer for \"cc76f6a2ed069f539fd74d0187edac2ce3bc4ef77bfb7cc9ebe54463270af23d\"" Nov 5 15:54:55.486352 containerd[1641]: time="2025-11-05T15:54:55.486322037Z" level=info msg="connecting to shim cc76f6a2ed069f539fd74d0187edac2ce3bc4ef77bfb7cc9ebe54463270af23d" address="unix:///run/containerd/s/445ce264b403ab9fc327d51ac4da22a2c7d8ae72daca153b015963eff8f2bb57" protocol=ttrpc version=3 Nov 5 15:54:55.517235 systemd[1]: Started cri-containerd-cc76f6a2ed069f539fd74d0187edac2ce3bc4ef77bfb7cc9ebe54463270af23d.scope - libcontainer container cc76f6a2ed069f539fd74d0187edac2ce3bc4ef77bfb7cc9ebe54463270af23d. 
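The calico/node pull reported above gives both its size and wall time ("156883537" bytes in 23.258699866s), which pins the effective registry throughput at roughly 6.7 MB/s; a two-line check of the arithmetic:

package main

import "fmt"

func main() {
	const sizeBytes = 156883537.0 // repo size reported by containerd above
	const seconds = 23.258699866  // pull duration from the same log line
	fmt.Printf("%.2f MB/s (%.2f MiB/s)\n", sizeBytes/seconds/1e6, sizeBytes/seconds/(1<<20))
	// ≈ 6.75 MB/s (6.43 MiB/s)
}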
Nov 5 15:54:55.566757 systemd[1]: Started sshd@7-10.0.0.94:22-10.0.0.1:60538.service - OpenSSH per-connection server daemon (10.0.0.1:60538). Nov 5 15:54:55.675158 containerd[1641]: time="2025-11-05T15:54:55.675101090Z" level=info msg="StartContainer for \"cc76f6a2ed069f539fd74d0187edac2ce3bc4ef77bfb7cc9ebe54463270af23d\" returns successfully" Nov 5 15:54:55.682372 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 5 15:54:55.682485 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Nov 5 15:54:55.685300 sshd[4088]: Accepted publickey for core from 10.0.0.1 port 60538 ssh2: RSA SHA256:jxfBzj8t4gNsP6XgB3HCYMs94mi46GFjdNA2wywm1q8 Nov 5 15:54:55.687822 sshd-session[4088]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:54:55.706174 systemd-logind[1620]: New session 8 of user core. Nov 5 15:54:55.716097 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 5 15:54:56.282411 sshd[4116]: Connection closed by 10.0.0.1 port 60538 Nov 5 15:54:56.282789 sshd-session[4088]: pam_unix(sshd:session): session closed for user core Nov 5 15:54:56.287536 systemd[1]: sshd@7-10.0.0.94:22-10.0.0.1:60538.service: Deactivated successfully. Nov 5 15:54:56.289752 systemd[1]: session-8.scope: Deactivated successfully. Nov 5 15:54:56.291043 systemd-logind[1620]: Session 8 logged out. Waiting for processes to exit. Nov 5 15:54:56.293083 systemd-logind[1620]: Removed session 8. Nov 5 15:54:56.349386 kubelet[2813]: E1105 15:54:56.349011 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:54:56.488516 containerd[1641]: time="2025-11-05T15:54:56.488461713Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cc76f6a2ed069f539fd74d0187edac2ce3bc4ef77bfb7cc9ebe54463270af23d\" id:\"de395aaaddd5ba874b3293a6ed4acc4b79e93c2991df81f7ab41ac611d559f36\" pid:4141 exit_status:1 exited_at:{seconds:1762358096 nanos:488054118}" Nov 5 15:54:57.351598 kubelet[2813]: E1105 15:54:57.351558 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:54:57.447165 containerd[1641]: time="2025-11-05T15:54:57.447104104Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cc76f6a2ed069f539fd74d0187edac2ce3bc4ef77bfb7cc9ebe54463270af23d\" id:\"db87ebe27b062fb89c127ed5da510807f6ceaf898f7a02d358dbdb4016f027f2\" pid:4168 exit_status:1 exited_at:{seconds:1762358097 nanos:446760561}" Nov 5 15:54:57.551094 kubelet[2813]: E1105 15:54:57.551034 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:54:57.811343 kubelet[2813]: I1105 15:54:57.810771 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-pmv2p" podStartSLOduration=3.998943596 podStartE2EDuration="44.810750652s" podCreationTimestamp="2025-11-05 15:54:13 +0000 UTC" firstStartedPulling="2025-11-05 15:54:14.327762807 +0000 UTC m=+31.920638536" lastFinishedPulling="2025-11-05 15:54:55.139569873 +0000 UTC m=+72.732445592" observedRunningTime="2025-11-05 15:54:56.718494429 +0000 UTC m=+74.311370158" watchObservedRunningTime="2025-11-05 15:54:57.810750652 +0000 UTC m=+75.403626401" Nov 5 15:54:57.956841
kubelet[2813]: I1105 15:54:57.956776 2813 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/668672a8-25ac-4baa-8f03-69b94b894d13-whisker-ca-bundle\") pod \"668672a8-25ac-4baa-8f03-69b94b894d13\" (UID: \"668672a8-25ac-4baa-8f03-69b94b894d13\") " Nov 5 15:54:57.957058 kubelet[2813]: I1105 15:54:57.956858 2813 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/668672a8-25ac-4baa-8f03-69b94b894d13-whisker-backend-key-pair\") pod \"668672a8-25ac-4baa-8f03-69b94b894d13\" (UID: \"668672a8-25ac-4baa-8f03-69b94b894d13\") " Nov 5 15:54:57.957058 kubelet[2813]: I1105 15:54:57.956880 2813 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9vjl\" (UniqueName: \"kubernetes.io/projected/668672a8-25ac-4baa-8f03-69b94b894d13-kube-api-access-z9vjl\") pod \"668672a8-25ac-4baa-8f03-69b94b894d13\" (UID: \"668672a8-25ac-4baa-8f03-69b94b894d13\") " Nov 5 15:54:57.959082 kubelet[2813]: I1105 15:54:57.957674 2813 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/668672a8-25ac-4baa-8f03-69b94b894d13-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "668672a8-25ac-4baa-8f03-69b94b894d13" (UID: "668672a8-25ac-4baa-8f03-69b94b894d13"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 5 15:54:57.965251 kubelet[2813]: I1105 15:54:57.965177 2813 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/668672a8-25ac-4baa-8f03-69b94b894d13-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "668672a8-25ac-4baa-8f03-69b94b894d13" (UID: "668672a8-25ac-4baa-8f03-69b94b894d13"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 5 15:54:57.967542 kubelet[2813]: I1105 15:54:57.967495 2813 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/668672a8-25ac-4baa-8f03-69b94b894d13-kube-api-access-z9vjl" (OuterVolumeSpecName: "kube-api-access-z9vjl") pod "668672a8-25ac-4baa-8f03-69b94b894d13" (UID: "668672a8-25ac-4baa-8f03-69b94b894d13"). InnerVolumeSpecName "kube-api-access-z9vjl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 5 15:54:57.969403 systemd[1]: var-lib-kubelet-pods-668672a8\x2d25ac\x2d4baa\x2d8f03\x2d69b94b894d13-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz9vjl.mount: Deactivated successfully. Nov 5 15:54:57.969585 systemd[1]: var-lib-kubelet-pods-668672a8\x2d25ac\x2d4baa\x2d8f03\x2d69b94b894d13-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
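The pod_startup_latency_tracker entry above is internally consistent: using the monotonic (m=+…) offsets it logs, podStartSLOduration is the end-to-end start duration minus the time spent pulling images. A few lines verify the numbers:

package main

import "fmt"

func main() {
	// Monotonic offsets (m=+…, in seconds) from the startup-latency entry above.
	firstStartedPulling := 31.920638536
	lastFinishedPulling := 72.732445592
	podStartE2E := 44.810750652 // podStartE2EDuration

	pulling := lastFinishedPulling - firstStartedPulling
	fmt.Printf("image pulls took %.9fs\n", pulling)                  // 40.811807056s
	fmt.Printf("podStartSLOduration = %.9fs\n", podStartE2E-pulling) // 3.998943596s, matching the log
}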
Nov 5 15:54:58.057489 kubelet[2813]: I1105 15:54:58.057435 2813 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/668672a8-25ac-4baa-8f03-69b94b894d13-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Nov 5 15:54:58.057489 kubelet[2813]: I1105 15:54:58.057475 2813 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z9vjl\" (UniqueName: \"kubernetes.io/projected/668672a8-25ac-4baa-8f03-69b94b894d13-kube-api-access-z9vjl\") on node \"localhost\" DevicePath \"\"" Nov 5 15:54:58.057489 kubelet[2813]: I1105 15:54:58.057485 2813 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/668672a8-25ac-4baa-8f03-69b94b894d13-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Nov 5 15:54:58.361698 systemd[1]: Removed slice kubepods-besteffort-pod668672a8_25ac_4baa_8f03_69b94b894d13.slice - libcontainer container kubepods-besteffort-pod668672a8_25ac_4baa_8f03_69b94b894d13.slice. Nov 5 15:54:59.204503 systemd[1]: Created slice kubepods-besteffort-pod4d556d16_c6b3_4ab7_996f_c53ed792f703.slice - libcontainer container kubepods-besteffort-pod4d556d16_c6b3_4ab7_996f_c53ed792f703.slice. Nov 5 15:54:59.266367 kubelet[2813]: I1105 15:54:59.266302 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4d556d16-c6b3-4ab7-996f-c53ed792f703-whisker-backend-key-pair\") pod \"whisker-77b5df4b9c-nv52j\" (UID: \"4d556d16-c6b3-4ab7-996f-c53ed792f703\") " pod="calico-system/whisker-77b5df4b9c-nv52j" Nov 5 15:54:59.266367 kubelet[2813]: I1105 15:54:59.266352 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tx4g\" (UniqueName: \"kubernetes.io/projected/4d556d16-c6b3-4ab7-996f-c53ed792f703-kube-api-access-4tx4g\") pod \"whisker-77b5df4b9c-nv52j\" (UID: \"4d556d16-c6b3-4ab7-996f-c53ed792f703\") " pod="calico-system/whisker-77b5df4b9c-nv52j" Nov 5 15:54:59.266367 kubelet[2813]: I1105 15:54:59.266370 2813 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d556d16-c6b3-4ab7-996f-c53ed792f703-whisker-ca-bundle\") pod \"whisker-77b5df4b9c-nv52j\" (UID: \"4d556d16-c6b3-4ab7-996f-c53ed792f703\") " pod="calico-system/whisker-77b5df4b9c-nv52j" Nov 5 15:54:59.536843 containerd[1641]: time="2025-11-05T15:54:59.536438653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77b5df4b9c-nv52j,Uid:4d556d16-c6b3-4ab7-996f-c53ed792f703,Namespace:calico-system,Attempt:0,}" Nov 5 15:55:00.553027 kubelet[2813]: I1105 15:55:00.552975 2813 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="668672a8-25ac-4baa-8f03-69b94b894d13" path="/var/lib/kubelet/pods/668672a8-25ac-4baa-8f03-69b94b894d13/volumes" Nov 5 15:55:00.700678 systemd-networkd[1529]: vxlan.calico: Link UP Nov 5 15:55:00.700691 systemd-networkd[1529]: vxlan.calico: Gained carrier Nov 5 15:55:01.043726 systemd-networkd[1529]: cali445a2e936c7: Link UP Nov 5 15:55:01.044416 systemd-networkd[1529]: cali445a2e936c7: Gained carrier Nov 5 15:55:01.076262 containerd[1641]: 2025-11-05 15:54:59.664 [INFO][4205] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 15:55:01.076262 containerd[1641]: 2025-11-05 15:55:00.112 [INFO][4205] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--77b5df4b9c--nv52j-eth0 whisker-77b5df4b9c- calico-system 4d556d16-c6b3-4ab7-996f-c53ed792f703 1085 0 2025-11-05 15:54:59 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:77b5df4b9c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-77b5df4b9c-nv52j eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali445a2e936c7 [] [] }} ContainerID="e5fa8830bb07b76d09065188d94d8e3197f9fed61c40877726ac44479279b688" Namespace="calico-system" Pod="whisker-77b5df4b9c-nv52j" WorkloadEndpoint="localhost-k8s-whisker--77b5df4b9c--nv52j-" Nov 5 15:55:01.076262 containerd[1641]: 2025-11-05 15:55:00.112 [INFO][4205] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e5fa8830bb07b76d09065188d94d8e3197f9fed61c40877726ac44479279b688" Namespace="calico-system" Pod="whisker-77b5df4b9c-nv52j" WorkloadEndpoint="localhost-k8s-whisker--77b5df4b9c--nv52j-eth0" Nov 5 15:55:01.076262 containerd[1641]: 2025-11-05 15:55:00.819 [INFO][4239] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e5fa8830bb07b76d09065188d94d8e3197f9fed61c40877726ac44479279b688" HandleID="k8s-pod-network.e5fa8830bb07b76d09065188d94d8e3197f9fed61c40877726ac44479279b688" Workload="localhost-k8s-whisker--77b5df4b9c--nv52j-eth0" Nov 5 15:55:01.076916 containerd[1641]: 2025-11-05 15:55:00.820 [INFO][4239] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e5fa8830bb07b76d09065188d94d8e3197f9fed61c40877726ac44479279b688" HandleID="k8s-pod-network.e5fa8830bb07b76d09065188d94d8e3197f9fed61c40877726ac44479279b688" Workload="localhost-k8s-whisker--77b5df4b9c--nv52j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00048aff0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-77b5df4b9c-nv52j", "timestamp":"2025-11-05 15:55:00.819686423 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:55:01.076916 containerd[1641]: 2025-11-05 15:55:00.820 [INFO][4239] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:55:01.076916 containerd[1641]: 2025-11-05 15:55:00.820 [INFO][4239] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 15:55:01.076916 containerd[1641]: 2025-11-05 15:55:00.820 [INFO][4239] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 15:55:01.076916 containerd[1641]: 2025-11-05 15:55:00.845 [INFO][4239] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e5fa8830bb07b76d09065188d94d8e3197f9fed61c40877726ac44479279b688" host="localhost" Nov 5 15:55:01.076916 containerd[1641]: 2025-11-05 15:55:00.884 [INFO][4239] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 15:55:01.076916 containerd[1641]: 2025-11-05 15:55:00.889 [INFO][4239] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 15:55:01.076916 containerd[1641]: 2025-11-05 15:55:00.898 [INFO][4239] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 15:55:01.076916 containerd[1641]: 2025-11-05 15:55:00.901 [INFO][4239] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 15:55:01.076916 containerd[1641]: 2025-11-05 15:55:00.901 [INFO][4239] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e5fa8830bb07b76d09065188d94d8e3197f9fed61c40877726ac44479279b688" host="localhost" Nov 5 15:55:01.077261 containerd[1641]: 2025-11-05 15:55:00.913 [INFO][4239] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e5fa8830bb07b76d09065188d94d8e3197f9fed61c40877726ac44479279b688 Nov 5 15:55:01.077261 containerd[1641]: 2025-11-05 15:55:01.004 [INFO][4239] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e5fa8830bb07b76d09065188d94d8e3197f9fed61c40877726ac44479279b688" host="localhost" Nov 5 15:55:01.077261 containerd[1641]: 2025-11-05 15:55:01.022 [INFO][4239] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.e5fa8830bb07b76d09065188d94d8e3197f9fed61c40877726ac44479279b688" host="localhost" Nov 5 15:55:01.077261 containerd[1641]: 2025-11-05 15:55:01.022 [INFO][4239] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.e5fa8830bb07b76d09065188d94d8e3197f9fed61c40877726ac44479279b688" host="localhost" Nov 5 15:55:01.077261 containerd[1641]: 2025-11-05 15:55:01.022 [INFO][4239] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 15:55:01.077261 containerd[1641]: 2025-11-05 15:55:01.022 [INFO][4239] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="e5fa8830bb07b76d09065188d94d8e3197f9fed61c40877726ac44479279b688" HandleID="k8s-pod-network.e5fa8830bb07b76d09065188d94d8e3197f9fed61c40877726ac44479279b688" Workload="localhost-k8s-whisker--77b5df4b9c--nv52j-eth0" Nov 5 15:55:01.077450 containerd[1641]: 2025-11-05 15:55:01.027 [INFO][4205] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e5fa8830bb07b76d09065188d94d8e3197f9fed61c40877726ac44479279b688" Namespace="calico-system" Pod="whisker-77b5df4b9c-nv52j" WorkloadEndpoint="localhost-k8s-whisker--77b5df4b9c--nv52j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--77b5df4b9c--nv52j-eth0", GenerateName:"whisker-77b5df4b9c-", Namespace:"calico-system", SelfLink:"", UID:"4d556d16-c6b3-4ab7-996f-c53ed792f703", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 54, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"77b5df4b9c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-77b5df4b9c-nv52j", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali445a2e936c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:55:01.077450 containerd[1641]: 2025-11-05 15:55:01.027 [INFO][4205] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="e5fa8830bb07b76d09065188d94d8e3197f9fed61c40877726ac44479279b688" Namespace="calico-system" Pod="whisker-77b5df4b9c-nv52j" WorkloadEndpoint="localhost-k8s-whisker--77b5df4b9c--nv52j-eth0" Nov 5 15:55:01.077568 containerd[1641]: 2025-11-05 15:55:01.027 [INFO][4205] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali445a2e936c7 ContainerID="e5fa8830bb07b76d09065188d94d8e3197f9fed61c40877726ac44479279b688" Namespace="calico-system" Pod="whisker-77b5df4b9c-nv52j" WorkloadEndpoint="localhost-k8s-whisker--77b5df4b9c--nv52j-eth0" Nov 5 15:55:01.077568 containerd[1641]: 2025-11-05 15:55:01.049 [INFO][4205] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e5fa8830bb07b76d09065188d94d8e3197f9fed61c40877726ac44479279b688" Namespace="calico-system" Pod="whisker-77b5df4b9c-nv52j" WorkloadEndpoint="localhost-k8s-whisker--77b5df4b9c--nv52j-eth0" Nov 5 15:55:01.077633 containerd[1641]: 2025-11-05 15:55:01.050 [INFO][4205] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e5fa8830bb07b76d09065188d94d8e3197f9fed61c40877726ac44479279b688" Namespace="calico-system" Pod="whisker-77b5df4b9c-nv52j" WorkloadEndpoint="localhost-k8s-whisker--77b5df4b9c--nv52j-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--77b5df4b9c--nv52j-eth0", GenerateName:"whisker-77b5df4b9c-", Namespace:"calico-system", SelfLink:"", UID:"4d556d16-c6b3-4ab7-996f-c53ed792f703", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 54, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"77b5df4b9c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e5fa8830bb07b76d09065188d94d8e3197f9fed61c40877726ac44479279b688", Pod:"whisker-77b5df4b9c-nv52j", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali445a2e936c7", MAC:"b2:7a:35:14:d2:06", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:55:01.077707 containerd[1641]: 2025-11-05 15:55:01.072 [INFO][4205] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e5fa8830bb07b76d09065188d94d8e3197f9fed61c40877726ac44479279b688" Namespace="calico-system" Pod="whisker-77b5df4b9c-nv52j" WorkloadEndpoint="localhost-k8s-whisker--77b5df4b9c--nv52j-eth0" Nov 5 15:55:01.298452 systemd[1]: Started sshd@8-10.0.0.94:22-10.0.0.1:37650.service - OpenSSH per-connection server daemon (10.0.0.1:37650). Nov 5 15:55:01.375031 sshd[4447]: Accepted publickey for core from 10.0.0.1 port 37650 ssh2: RSA SHA256:jxfBzj8t4gNsP6XgB3HCYMs94mi46GFjdNA2wywm1q8 Nov 5 15:55:01.377235 sshd-session[4447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:55:01.382533 systemd-logind[1620]: New session 9 of user core. Nov 5 15:55:01.390161 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 5 15:55:01.553554 sshd[4450]: Connection closed by 10.0.0.1 port 37650 Nov 5 15:55:01.554343 sshd-session[4447]: pam_unix(sshd:session): session closed for user core Nov 5 15:55:01.559090 systemd[1]: sshd@8-10.0.0.94:22-10.0.0.1:37650.service: Deactivated successfully. Nov 5 15:55:01.561585 systemd[1]: session-9.scope: Deactivated successfully. Nov 5 15:55:01.563324 systemd-logind[1620]: Session 9 logged out. Waiting for processes to exit. Nov 5 15:55:01.565002 systemd-logind[1620]: Removed session 9. Nov 5 15:55:01.666650 containerd[1641]: time="2025-11-05T15:55:01.666580516Z" level=info msg="connecting to shim e5fa8830bb07b76d09065188d94d8e3197f9fed61c40877726ac44479279b688" address="unix:///run/containerd/s/afddb427307c22e092136bd8aecba43242379cadad1d38149e227a6cb4a05149" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:55:01.692085 systemd[1]: Started cri-containerd-e5fa8830bb07b76d09065188d94d8e3197f9fed61c40877726ac44479279b688.scope - libcontainer container e5fa8830bb07b76d09065188d94d8e3197f9fed61c40877726ac44479279b688. 
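The IPAM trace above shows this host holding an affinity for the block 192.168.88.128/26 and carving single /32 addresses out of it, starting with 192.168.88.129 for the whisker pod (the csi-node-driver pod receives 192.168.88.130 from the same block further down). A quick net/netip check that those assignments sit inside the /26, which spans the 64 addresses .128 through .191:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Block and addresses taken from the IPAM log lines.
	block := netip.MustParsePrefix("192.168.88.128/26")
	for _, s := range []string{"192.168.88.129", "192.168.88.130"} {
		addr := netip.MustParseAddr(s)
		fmt.Printf("%s in %s: %v\n", s, block, block.Contains(addr))
	}
}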
Nov 5 15:55:01.721629 systemd-resolved[1322]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 15:55:01.840368 containerd[1641]: time="2025-11-05T15:55:01.840303232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-77b5df4b9c-nv52j,Uid:4d556d16-c6b3-4ab7-996f-c53ed792f703,Namespace:calico-system,Attempt:0,} returns sandbox id \"e5fa8830bb07b76d09065188d94d8e3197f9fed61c40877726ac44479279b688\"" Nov 5 15:55:01.842254 containerd[1641]: time="2025-11-05T15:55:01.842213668Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 15:55:02.213146 systemd-networkd[1529]: vxlan.calico: Gained IPv6LL Nov 5 15:55:02.269548 containerd[1641]: time="2025-11-05T15:55:02.269470390Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:55:02.310713 containerd[1641]: time="2025-11-05T15:55:02.310629783Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 15:55:02.321454 containerd[1641]: time="2025-11-05T15:55:02.321358066Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 15:55:02.321773 kubelet[2813]: E1105 15:55:02.321714 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:55:02.322357 kubelet[2813]: E1105 15:55:02.321784 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:55:02.322357 kubelet[2813]: E1105 15:55:02.321969 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-77b5df4b9c-nv52j_calico-system(4d556d16-c6b3-4ab7-996f-c53ed792f703): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 15:55:02.323067 containerd[1641]: time="2025-11-05T15:55:02.323018217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 15:55:02.725276 systemd-networkd[1529]: cali445a2e936c7: Gained IPv6LL Nov 5 15:55:02.946859 containerd[1641]: time="2025-11-05T15:55:02.946783324Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:55:02.968196 containerd[1641]: time="2025-11-05T15:55:02.967828359Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 15:55:02.968196 
containerd[1641]: time="2025-11-05T15:55:02.967917137Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 15:55:02.968424 kubelet[2813]: E1105 15:55:02.968210 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:55:02.968424 kubelet[2813]: E1105 15:55:02.968276 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:55:02.968424 kubelet[2813]: E1105 15:55:02.968373 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-77b5df4b9c-nv52j_calico-system(4d556d16-c6b3-4ab7-996f-c53ed792f703): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 15:55:02.968540 kubelet[2813]: E1105 15:55:02.968434 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77b5df4b9c-nv52j" podUID="4d556d16-c6b3-4ab7-996f-c53ed792f703" Nov 5 15:55:03.372067 kubelet[2813]: E1105 15:55:03.371998 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77b5df4b9c-nv52j" podUID="4d556d16-c6b3-4ab7-996f-c53ed792f703" Nov 5 15:55:03.551194 kubelet[2813]: E1105 15:55:03.551124 2813 dns.go:154] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:55:06.550878 kubelet[2813]: E1105 15:55:06.550774 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:55:06.567082 systemd[1]: Started sshd@9-10.0.0.94:22-10.0.0.1:37662.service - OpenSSH per-connection server daemon (10.0.0.1:37662). Nov 5 15:55:06.616519 containerd[1641]: time="2025-11-05T15:55:06.616454096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gf82q,Uid:5cbe1702-972a-4f84-9d2f-51b96b54edda,Namespace:calico-system,Attempt:0,}" Nov 5 15:55:06.634599 sshd[4521]: Accepted publickey for core from 10.0.0.1 port 37662 ssh2: RSA SHA256:jxfBzj8t4gNsP6XgB3HCYMs94mi46GFjdNA2wywm1q8 Nov 5 15:55:06.637183 sshd-session[4521]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:55:06.642764 systemd-logind[1620]: New session 10 of user core. Nov 5 15:55:06.657332 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 5 15:55:06.872192 sshd[4536]: Connection closed by 10.0.0.1 port 37662 Nov 5 15:55:06.872782 sshd-session[4521]: pam_unix(sshd:session): session closed for user core Nov 5 15:55:06.880085 systemd[1]: sshd@9-10.0.0.94:22-10.0.0.1:37662.service: Deactivated successfully. Nov 5 15:55:06.882912 systemd[1]: session-10.scope: Deactivated successfully. Nov 5 15:55:06.884283 systemd-logind[1620]: Session 10 logged out. Waiting for processes to exit. Nov 5 15:55:06.886178 systemd-logind[1620]: Removed session 10. Nov 5 15:55:06.959173 systemd-networkd[1529]: cali15a2bcf1a70: Link UP Nov 5 15:55:06.960147 systemd-networkd[1529]: cali15a2bcf1a70: Gained carrier Nov 5 15:55:06.993153 containerd[1641]: 2025-11-05 15:55:06.868 [INFO][4524] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--gf82q-eth0 csi-node-driver- calico-system 5cbe1702-972a-4f84-9d2f-51b96b54edda 796 0 2025-11-05 15:54:13 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-gf82q eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali15a2bcf1a70 [] [] }} ContainerID="b28483a72b801d80a6c7f0b912c538d21c968cb455fa49759e9b7c47e9aa5642" Namespace="calico-system" Pod="csi-node-driver-gf82q" WorkloadEndpoint="localhost-k8s-csi--node--driver--gf82q-" Nov 5 15:55:06.993153 containerd[1641]: 2025-11-05 15:55:06.868 [INFO][4524] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b28483a72b801d80a6c7f0b912c538d21c968cb455fa49759e9b7c47e9aa5642" Namespace="calico-system" Pod="csi-node-driver-gf82q" WorkloadEndpoint="localhost-k8s-csi--node--driver--gf82q-eth0" Nov 5 15:55:06.993153 containerd[1641]: 2025-11-05 15:55:06.915 [INFO][4548] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b28483a72b801d80a6c7f0b912c538d21c968cb455fa49759e9b7c47e9aa5642" HandleID="k8s-pod-network.b28483a72b801d80a6c7f0b912c538d21c968cb455fa49759e9b7c47e9aa5642" Workload="localhost-k8s-csi--node--driver--gf82q-eth0" Nov 5 15:55:06.993630 
containerd[1641]: 2025-11-05 15:55:06.916 [INFO][4548] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b28483a72b801d80a6c7f0b912c538d21c968cb455fa49759e9b7c47e9aa5642" HandleID="k8s-pod-network.b28483a72b801d80a6c7f0b912c538d21c968cb455fa49759e9b7c47e9aa5642" Workload="localhost-k8s-csi--node--driver--gf82q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b4db0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-gf82q", "timestamp":"2025-11-05 15:55:06.915644408 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:55:06.993630 containerd[1641]: 2025-11-05 15:55:06.916 [INFO][4548] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:55:06.993630 containerd[1641]: 2025-11-05 15:55:06.916 [INFO][4548] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 15:55:06.993630 containerd[1641]: 2025-11-05 15:55:06.916 [INFO][4548] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 15:55:06.993630 containerd[1641]: 2025-11-05 15:55:06.925 [INFO][4548] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b28483a72b801d80a6c7f0b912c538d21c968cb455fa49759e9b7c47e9aa5642" host="localhost" Nov 5 15:55:06.993630 containerd[1641]: 2025-11-05 15:55:06.930 [INFO][4548] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 15:55:06.993630 containerd[1641]: 2025-11-05 15:55:06.934 [INFO][4548] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 15:55:06.993630 containerd[1641]: 2025-11-05 15:55:06.936 [INFO][4548] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 15:55:06.993630 containerd[1641]: 2025-11-05 15:55:06.938 [INFO][4548] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 15:55:06.993630 containerd[1641]: 2025-11-05 15:55:06.938 [INFO][4548] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b28483a72b801d80a6c7f0b912c538d21c968cb455fa49759e9b7c47e9aa5642" host="localhost" Nov 5 15:55:06.994253 containerd[1641]: 2025-11-05 15:55:06.939 [INFO][4548] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b28483a72b801d80a6c7f0b912c538d21c968cb455fa49759e9b7c47e9aa5642 Nov 5 15:55:06.994253 containerd[1641]: 2025-11-05 15:55:06.944 [INFO][4548] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b28483a72b801d80a6c7f0b912c538d21c968cb455fa49759e9b7c47e9aa5642" host="localhost" Nov 5 15:55:06.994253 containerd[1641]: 2025-11-05 15:55:06.951 [INFO][4548] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.b28483a72b801d80a6c7f0b912c538d21c968cb455fa49759e9b7c47e9aa5642" host="localhost" Nov 5 15:55:06.994253 containerd[1641]: 2025-11-05 15:55:06.951 [INFO][4548] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.b28483a72b801d80a6c7f0b912c538d21c968cb455fa49759e9b7c47e9aa5642" host="localhost" Nov 5 15:55:06.994253 containerd[1641]: 2025-11-05 15:55:06.951 [INFO][4548] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 15:55:06.994253 containerd[1641]: 2025-11-05 15:55:06.951 [INFO][4548] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="b28483a72b801d80a6c7f0b912c538d21c968cb455fa49759e9b7c47e9aa5642" HandleID="k8s-pod-network.b28483a72b801d80a6c7f0b912c538d21c968cb455fa49759e9b7c47e9aa5642" Workload="localhost-k8s-csi--node--driver--gf82q-eth0" Nov 5 15:55:06.994637 containerd[1641]: 2025-11-05 15:55:06.955 [INFO][4524] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b28483a72b801d80a6c7f0b912c538d21c968cb455fa49759e9b7c47e9aa5642" Namespace="calico-system" Pod="csi-node-driver-gf82q" WorkloadEndpoint="localhost-k8s-csi--node--driver--gf82q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gf82q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5cbe1702-972a-4f84-9d2f-51b96b54edda", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 54, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-gf82q", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali15a2bcf1a70", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:55:06.994769 containerd[1641]: 2025-11-05 15:55:06.955 [INFO][4524] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="b28483a72b801d80a6c7f0b912c538d21c968cb455fa49759e9b7c47e9aa5642" Namespace="calico-system" Pod="csi-node-driver-gf82q" WorkloadEndpoint="localhost-k8s-csi--node--driver--gf82q-eth0" Nov 5 15:55:06.994769 containerd[1641]: 2025-11-05 15:55:06.955 [INFO][4524] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali15a2bcf1a70 ContainerID="b28483a72b801d80a6c7f0b912c538d21c968cb455fa49759e9b7c47e9aa5642" Namespace="calico-system" Pod="csi-node-driver-gf82q" WorkloadEndpoint="localhost-k8s-csi--node--driver--gf82q-eth0" Nov 5 15:55:06.994769 containerd[1641]: 2025-11-05 15:55:06.959 [INFO][4524] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b28483a72b801d80a6c7f0b912c538d21c968cb455fa49759e9b7c47e9aa5642" Namespace="calico-system" Pod="csi-node-driver-gf82q" WorkloadEndpoint="localhost-k8s-csi--node--driver--gf82q-eth0" Nov 5 15:55:06.994883 containerd[1641]: 2025-11-05 15:55:06.961 [INFO][4524] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b28483a72b801d80a6c7f0b912c538d21c968cb455fa49759e9b7c47e9aa5642" Namespace="calico-system" Pod="csi-node-driver-gf82q" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--gf82q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--gf82q-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5cbe1702-972a-4f84-9d2f-51b96b54edda", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 54, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b28483a72b801d80a6c7f0b912c538d21c968cb455fa49759e9b7c47e9aa5642", Pod:"csi-node-driver-gf82q", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali15a2bcf1a70", MAC:"92:ed:25:48:fb:e0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:55:06.994981 containerd[1641]: 2025-11-05 15:55:06.988 [INFO][4524] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b28483a72b801d80a6c7f0b912c538d21c968cb455fa49759e9b7c47e9aa5642" Namespace="calico-system" Pod="csi-node-driver-gf82q" WorkloadEndpoint="localhost-k8s-csi--node--driver--gf82q-eth0" Nov 5 15:55:07.344366 containerd[1641]: time="2025-11-05T15:55:07.344316302Z" level=info msg="connecting to shim b28483a72b801d80a6c7f0b912c538d21c968cb455fa49759e9b7c47e9aa5642" address="unix:///run/containerd/s/0746a32c0485d87f0f398f31f6a26da9c8cbaf9b06c5e1216186969f33383eee" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:55:07.378221 systemd[1]: Started cri-containerd-b28483a72b801d80a6c7f0b912c538d21c968cb455fa49759e9b7c47e9aa5642.scope - libcontainer container b28483a72b801d80a6c7f0b912c538d21c968cb455fa49759e9b7c47e9aa5642. 
Nov 5 15:55:07.393106 systemd-resolved[1322]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 15:55:07.418208 containerd[1641]: time="2025-11-05T15:55:07.418155463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-gf82q,Uid:5cbe1702-972a-4f84-9d2f-51b96b54edda,Namespace:calico-system,Attempt:0,} returns sandbox id \"b28483a72b801d80a6c7f0b912c538d21c968cb455fa49759e9b7c47e9aa5642\"" Nov 5 15:55:07.423306 containerd[1641]: time="2025-11-05T15:55:07.423037519Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 15:55:07.577301 containerd[1641]: time="2025-11-05T15:55:07.577224120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d76b985b9-kbchr,Uid:bc1f133a-26eb-43d7-9fdb-a3e47afd9653,Namespace:calico-apiserver,Attempt:0,}" Nov 5 15:55:07.597725 kubelet[2813]: E1105 15:55:07.597617 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:55:07.598969 containerd[1641]: time="2025-11-05T15:55:07.598908147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ch7wn,Uid:e9514e0b-1fb1-4b7f-898a-2d78ba283593,Namespace:kube-system,Attempt:0,}" Nov 5 15:55:07.618042 containerd[1641]: time="2025-11-05T15:55:07.617968852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-757d4c4c4d-gc5kt,Uid:b05fd954-e904-4df9-a183-93526853dbb1,Namespace:calico-system,Attempt:0,}" Nov 5 15:55:07.780094 containerd[1641]: time="2025-11-05T15:55:07.779713180Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:55:07.791888 containerd[1641]: time="2025-11-05T15:55:07.791753716Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 15:55:07.792286 containerd[1641]: time="2025-11-05T15:55:07.792081447Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 15:55:07.796265 kubelet[2813]: E1105 15:55:07.794274 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:55:07.796265 kubelet[2813]: E1105 15:55:07.794343 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:55:07.796265 kubelet[2813]: E1105 15:55:07.794654 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-gf82q_calico-system(5cbe1702-972a-4f84-9d2f-51b96b54edda): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Nov 5 15:55:07.807201 containerd[1641]: time="2025-11-05T15:55:07.807122501Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 15:55:07.933088 systemd-networkd[1529]: calide0c2f0be05: Link UP Nov 5 15:55:07.935057 systemd-networkd[1529]: calide0c2f0be05: Gained carrier Nov 5 15:55:08.098820 containerd[1641]: 2025-11-05 15:55:07.668 [INFO][4616] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--d76b985b9--kbchr-eth0 calico-apiserver-d76b985b9- calico-apiserver bc1f133a-26eb-43d7-9fdb-a3e47afd9653 924 0 2025-11-05 15:54:03 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d76b985b9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-d76b985b9-kbchr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calide0c2f0be05 [] [] }} ContainerID="aa43b6b709d02f7c628e0a12db991a3561b44433728cdf8c9051bfb653959e16" Namespace="calico-apiserver" Pod="calico-apiserver-d76b985b9-kbchr" WorkloadEndpoint="localhost-k8s-calico--apiserver--d76b985b9--kbchr-" Nov 5 15:55:08.098820 containerd[1641]: 2025-11-05 15:55:07.668 [INFO][4616] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aa43b6b709d02f7c628e0a12db991a3561b44433728cdf8c9051bfb653959e16" Namespace="calico-apiserver" Pod="calico-apiserver-d76b985b9-kbchr" WorkloadEndpoint="localhost-k8s-calico--apiserver--d76b985b9--kbchr-eth0" Nov 5 15:55:08.098820 containerd[1641]: 2025-11-05 15:55:07.817 [INFO][4650] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aa43b6b709d02f7c628e0a12db991a3561b44433728cdf8c9051bfb653959e16" HandleID="k8s-pod-network.aa43b6b709d02f7c628e0a12db991a3561b44433728cdf8c9051bfb653959e16" Workload="localhost-k8s-calico--apiserver--d76b985b9--kbchr-eth0" Nov 5 15:55:08.099164 containerd[1641]: 2025-11-05 15:55:07.817 [INFO][4650] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="aa43b6b709d02f7c628e0a12db991a3561b44433728cdf8c9051bfb653959e16" HandleID="k8s-pod-network.aa43b6b709d02f7c628e0a12db991a3561b44433728cdf8c9051bfb653959e16" Workload="localhost-k8s-calico--apiserver--d76b985b9--kbchr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7bc0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-d76b985b9-kbchr", "timestamp":"2025-11-05 15:55:07.81764544 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:55:08.099164 containerd[1641]: 2025-11-05 15:55:07.818 [INFO][4650] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:55:08.099164 containerd[1641]: 2025-11-05 15:55:07.818 [INFO][4650] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 15:55:08.099164 containerd[1641]: 2025-11-05 15:55:07.818 [INFO][4650] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 15:55:08.099164 containerd[1641]: 2025-11-05 15:55:07.845 [INFO][4650] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.aa43b6b709d02f7c628e0a12db991a3561b44433728cdf8c9051bfb653959e16" host="localhost" Nov 5 15:55:08.099164 containerd[1641]: 2025-11-05 15:55:07.853 [INFO][4650] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 15:55:08.099164 containerd[1641]: 2025-11-05 15:55:07.880 [INFO][4650] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 15:55:08.099164 containerd[1641]: 2025-11-05 15:55:07.883 [INFO][4650] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 15:55:08.099164 containerd[1641]: 2025-11-05 15:55:07.886 [INFO][4650] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 15:55:08.099164 containerd[1641]: 2025-11-05 15:55:07.886 [INFO][4650] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.aa43b6b709d02f7c628e0a12db991a3561b44433728cdf8c9051bfb653959e16" host="localhost" Nov 5 15:55:08.099782 containerd[1641]: 2025-11-05 15:55:07.889 [INFO][4650] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.aa43b6b709d02f7c628e0a12db991a3561b44433728cdf8c9051bfb653959e16 Nov 5 15:55:08.099782 containerd[1641]: 2025-11-05 15:55:07.898 [INFO][4650] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.aa43b6b709d02f7c628e0a12db991a3561b44433728cdf8c9051bfb653959e16" host="localhost" Nov 5 15:55:08.099782 containerd[1641]: 2025-11-05 15:55:07.922 [INFO][4650] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.aa43b6b709d02f7c628e0a12db991a3561b44433728cdf8c9051bfb653959e16" host="localhost" Nov 5 15:55:08.099782 containerd[1641]: 2025-11-05 15:55:07.922 [INFO][4650] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.aa43b6b709d02f7c628e0a12db991a3561b44433728cdf8c9051bfb653959e16" host="localhost" Nov 5 15:55:08.099782 containerd[1641]: 2025-11-05 15:55:07.923 [INFO][4650] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 15:55:08.099782 containerd[1641]: 2025-11-05 15:55:07.923 [INFO][4650] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="aa43b6b709d02f7c628e0a12db991a3561b44433728cdf8c9051bfb653959e16" HandleID="k8s-pod-network.aa43b6b709d02f7c628e0a12db991a3561b44433728cdf8c9051bfb653959e16" Workload="localhost-k8s-calico--apiserver--d76b985b9--kbchr-eth0" Nov 5 15:55:08.100070 containerd[1641]: 2025-11-05 15:55:07.928 [INFO][4616] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aa43b6b709d02f7c628e0a12db991a3561b44433728cdf8c9051bfb653959e16" Namespace="calico-apiserver" Pod="calico-apiserver-d76b985b9-kbchr" WorkloadEndpoint="localhost-k8s-calico--apiserver--d76b985b9--kbchr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d76b985b9--kbchr-eth0", GenerateName:"calico-apiserver-d76b985b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"bc1f133a-26eb-43d7-9fdb-a3e47afd9653", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 54, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d76b985b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-d76b985b9-kbchr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calide0c2f0be05", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:55:08.100165 containerd[1641]: 2025-11-05 15:55:07.928 [INFO][4616] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="aa43b6b709d02f7c628e0a12db991a3561b44433728cdf8c9051bfb653959e16" Namespace="calico-apiserver" Pod="calico-apiserver-d76b985b9-kbchr" WorkloadEndpoint="localhost-k8s-calico--apiserver--d76b985b9--kbchr-eth0" Nov 5 15:55:08.100165 containerd[1641]: 2025-11-05 15:55:07.928 [INFO][4616] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calide0c2f0be05 ContainerID="aa43b6b709d02f7c628e0a12db991a3561b44433728cdf8c9051bfb653959e16" Namespace="calico-apiserver" Pod="calico-apiserver-d76b985b9-kbchr" WorkloadEndpoint="localhost-k8s-calico--apiserver--d76b985b9--kbchr-eth0" Nov 5 15:55:08.100165 containerd[1641]: 2025-11-05 15:55:07.935 [INFO][4616] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aa43b6b709d02f7c628e0a12db991a3561b44433728cdf8c9051bfb653959e16" Namespace="calico-apiserver" Pod="calico-apiserver-d76b985b9-kbchr" WorkloadEndpoint="localhost-k8s-calico--apiserver--d76b985b9--kbchr-eth0" Nov 5 15:55:08.100262 containerd[1641]: 2025-11-05 15:55:07.936 [INFO][4616] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="aa43b6b709d02f7c628e0a12db991a3561b44433728cdf8c9051bfb653959e16" Namespace="calico-apiserver" Pod="calico-apiserver-d76b985b9-kbchr" WorkloadEndpoint="localhost-k8s-calico--apiserver--d76b985b9--kbchr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d76b985b9--kbchr-eth0", GenerateName:"calico-apiserver-d76b985b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"bc1f133a-26eb-43d7-9fdb-a3e47afd9653", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 54, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d76b985b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aa43b6b709d02f7c628e0a12db991a3561b44433728cdf8c9051bfb653959e16", Pod:"calico-apiserver-d76b985b9-kbchr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calide0c2f0be05", MAC:"9e:fd:06:90:b3:36", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:55:08.100348 containerd[1641]: 2025-11-05 15:55:08.095 [INFO][4616] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="aa43b6b709d02f7c628e0a12db991a3561b44433728cdf8c9051bfb653959e16" Namespace="calico-apiserver" Pod="calico-apiserver-d76b985b9-kbchr" WorkloadEndpoint="localhost-k8s-calico--apiserver--d76b985b9--kbchr-eth0" Nov 5 15:55:08.162684 containerd[1641]: time="2025-11-05T15:55:08.162621223Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:55:08.170897 containerd[1641]: time="2025-11-05T15:55:08.168301599Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 15:55:08.170897 containerd[1641]: time="2025-11-05T15:55:08.169949543Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 15:55:08.171292 kubelet[2813]: E1105 15:55:08.170973 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:55:08.171949 kubelet[2813]: E1105 15:55:08.171157 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:55:08.171949 kubelet[2813]: E1105 15:55:08.171796 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-gf82q_calico-system(5cbe1702-972a-4f84-9d2f-51b96b54edda): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 15:55:08.171949 kubelet[2813]: E1105 15:55:08.171862 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gf82q" podUID="5cbe1702-972a-4f84-9d2f-51b96b54edda" Nov 5 15:55:08.178612 systemd-networkd[1529]: cali1a5510ca266: Link UP Nov 5 15:55:08.181254 systemd-networkd[1529]: cali1a5510ca266: Gained carrier Nov 5 15:55:08.230958 containerd[1641]: time="2025-11-05T15:55:08.230659649Z" level=info msg="connecting to shim aa43b6b709d02f7c628e0a12db991a3561b44433728cdf8c9051bfb653959e16" address="unix:///run/containerd/s/8c0348a6e976133c43f4cc0c0336925ebf3180795f3d15908b340f2b558f271f" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:55:08.260995 containerd[1641]: 2025-11-05 15:55:07.703 [INFO][4633] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--ch7wn-eth0 coredns-66bc5c9577- kube-system e9514e0b-1fb1-4b7f-898a-2d78ba283593 922 0 2025-11-05 15:53:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-ch7wn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1a5510ca266 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="8e773a51dd802a027a686223ca8c733e741889fd3c2a67dd8b56bdc73a085349" Namespace="kube-system" Pod="coredns-66bc5c9577-ch7wn" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--ch7wn-" Nov 5 15:55:08.260995 containerd[1641]: 2025-11-05 15:55:07.703 [INFO][4633] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8e773a51dd802a027a686223ca8c733e741889fd3c2a67dd8b56bdc73a085349" Namespace="kube-system" Pod="coredns-66bc5c9577-ch7wn" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--ch7wn-eth0" Nov 5 15:55:08.260995 containerd[1641]: 2025-11-05 
15:55:07.843 [INFO][4656] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8e773a51dd802a027a686223ca8c733e741889fd3c2a67dd8b56bdc73a085349" HandleID="k8s-pod-network.8e773a51dd802a027a686223ca8c733e741889fd3c2a67dd8b56bdc73a085349" Workload="localhost-k8s-coredns--66bc5c9577--ch7wn-eth0" Nov 5 15:55:08.261287 containerd[1641]: 2025-11-05 15:55:07.843 [INFO][4656] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8e773a51dd802a027a686223ca8c733e741889fd3c2a67dd8b56bdc73a085349" HandleID="k8s-pod-network.8e773a51dd802a027a686223ca8c733e741889fd3c2a67dd8b56bdc73a085349" Workload="localhost-k8s-coredns--66bc5c9577--ch7wn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00025b0a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-ch7wn", "timestamp":"2025-11-05 15:55:07.843222318 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:55:08.261287 containerd[1641]: 2025-11-05 15:55:07.843 [INFO][4656] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:55:08.261287 containerd[1641]: 2025-11-05 15:55:07.923 [INFO][4656] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 15:55:08.261287 containerd[1641]: 2025-11-05 15:55:07.923 [INFO][4656] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 15:55:08.261287 containerd[1641]: 2025-11-05 15:55:07.944 [INFO][4656] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8e773a51dd802a027a686223ca8c733e741889fd3c2a67dd8b56bdc73a085349" host="localhost" Nov 5 15:55:08.261287 containerd[1641]: 2025-11-05 15:55:08.101 [INFO][4656] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 15:55:08.261287 containerd[1641]: 2025-11-05 15:55:08.114 [INFO][4656] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 15:55:08.261287 containerd[1641]: 2025-11-05 15:55:08.120 [INFO][4656] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 15:55:08.261287 containerd[1641]: 2025-11-05 15:55:08.125 [INFO][4656] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 15:55:08.261287 containerd[1641]: 2025-11-05 15:55:08.125 [INFO][4656] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8e773a51dd802a027a686223ca8c733e741889fd3c2a67dd8b56bdc73a085349" host="localhost" Nov 5 15:55:08.261593 containerd[1641]: 2025-11-05 15:55:08.128 [INFO][4656] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8e773a51dd802a027a686223ca8c733e741889fd3c2a67dd8b56bdc73a085349 Nov 5 15:55:08.261593 containerd[1641]: 2025-11-05 15:55:08.155 [INFO][4656] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8e773a51dd802a027a686223ca8c733e741889fd3c2a67dd8b56bdc73a085349" host="localhost" Nov 5 15:55:08.261593 containerd[1641]: 2025-11-05 15:55:08.165 [INFO][4656] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.8e773a51dd802a027a686223ca8c733e741889fd3c2a67dd8b56bdc73a085349" host="localhost" Nov 5 15:55:08.261593 containerd[1641]: 2025-11-05 15:55:08.165 [INFO][4656] ipam/ipam.go 878: 
Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.8e773a51dd802a027a686223ca8c733e741889fd3c2a67dd8b56bdc73a085349" host="localhost" Nov 5 15:55:08.261593 containerd[1641]: 2025-11-05 15:55:08.165 [INFO][4656] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 15:55:08.261593 containerd[1641]: 2025-11-05 15:55:08.166 [INFO][4656] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="8e773a51dd802a027a686223ca8c733e741889fd3c2a67dd8b56bdc73a085349" HandleID="k8s-pod-network.8e773a51dd802a027a686223ca8c733e741889fd3c2a67dd8b56bdc73a085349" Workload="localhost-k8s-coredns--66bc5c9577--ch7wn-eth0" Nov 5 15:55:08.262055 containerd[1641]: 2025-11-05 15:55:08.172 [INFO][4633] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8e773a51dd802a027a686223ca8c733e741889fd3c2a67dd8b56bdc73a085349" Namespace="kube-system" Pod="coredns-66bc5c9577-ch7wn" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--ch7wn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--ch7wn-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"e9514e0b-1fb1-4b7f-898a-2d78ba283593", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 53, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-ch7wn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1a5510ca266", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:55:08.262055 containerd[1641]: 2025-11-05 15:55:08.174 [INFO][4633] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="8e773a51dd802a027a686223ca8c733e741889fd3c2a67dd8b56bdc73a085349" Namespace="kube-system" Pod="coredns-66bc5c9577-ch7wn" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--ch7wn-eth0" Nov 5 15:55:08.262055 containerd[1641]: 2025-11-05 15:55:08.174 [INFO][4633] cni-plugin/dataplane_linux.go 69: Setting the host side veth 
name to cali1a5510ca266 ContainerID="8e773a51dd802a027a686223ca8c733e741889fd3c2a67dd8b56bdc73a085349" Namespace="kube-system" Pod="coredns-66bc5c9577-ch7wn" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--ch7wn-eth0" Nov 5 15:55:08.262055 containerd[1641]: 2025-11-05 15:55:08.181 [INFO][4633] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8e773a51dd802a027a686223ca8c733e741889fd3c2a67dd8b56bdc73a085349" Namespace="kube-system" Pod="coredns-66bc5c9577-ch7wn" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--ch7wn-eth0" Nov 5 15:55:08.262055 containerd[1641]: 2025-11-05 15:55:08.182 [INFO][4633] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8e773a51dd802a027a686223ca8c733e741889fd3c2a67dd8b56bdc73a085349" Namespace="kube-system" Pod="coredns-66bc5c9577-ch7wn" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--ch7wn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--ch7wn-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"e9514e0b-1fb1-4b7f-898a-2d78ba283593", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 53, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8e773a51dd802a027a686223ca8c733e741889fd3c2a67dd8b56bdc73a085349", Pod:"coredns-66bc5c9577-ch7wn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1a5510ca266", MAC:"fa:30:48:e6:63:0d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:55:08.262055 containerd[1641]: 2025-11-05 15:55:08.247 [INFO][4633] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8e773a51dd802a027a686223ca8c733e741889fd3c2a67dd8b56bdc73a085349" Namespace="kube-system" Pod="coredns-66bc5c9577-ch7wn" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--ch7wn-eth0" Nov 5 15:55:08.285113 systemd[1]: Started cri-containerd-aa43b6b709d02f7c628e0a12db991a3561b44433728cdf8c9051bfb653959e16.scope - libcontainer 
container aa43b6b709d02f7c628e0a12db991a3561b44433728cdf8c9051bfb653959e16. Nov 5 15:55:08.306325 systemd-resolved[1322]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 15:55:08.366555 containerd[1641]: time="2025-11-05T15:55:08.366486598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d76b985b9-kbchr,Uid:bc1f133a-26eb-43d7-9fdb-a3e47afd9653,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"aa43b6b709d02f7c628e0a12db991a3561b44433728cdf8c9051bfb653959e16\"" Nov 5 15:55:08.370075 containerd[1641]: time="2025-11-05T15:55:08.370037037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:55:08.387866 kubelet[2813]: E1105 15:55:08.387754 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gf82q" podUID="5cbe1702-972a-4f84-9d2f-51b96b54edda" Nov 5 15:55:08.390662 systemd-networkd[1529]: cali93efc188321: Link UP Nov 5 15:55:08.393861 systemd-networkd[1529]: cali93efc188321: Gained carrier Nov 5 15:55:08.409292 containerd[1641]: time="2025-11-05T15:55:08.409206967Z" level=info msg="connecting to shim 8e773a51dd802a027a686223ca8c733e741889fd3c2a67dd8b56bdc73a085349" address="unix:///run/containerd/s/6eaa07a0a60bbb50e4951bc0a48e49ab1f31f312f7d83f81261e3c9ca4b9b730" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:55:08.431660 containerd[1641]: 2025-11-05 15:55:07.854 [INFO][4658] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--757d4c4c4d--gc5kt-eth0 calico-kube-controllers-757d4c4c4d- calico-system b05fd954-e904-4df9-a183-93526853dbb1 928 0 2025-11-05 15:54:13 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:757d4c4c4d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-757d4c4c4d-gc5kt eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali93efc188321 [] [] }} ContainerID="b506a781e6c73089d7757bb7d2f2f041eb3aa92d7f415e78e06ce8e50f59c92b" Namespace="calico-system" Pod="calico-kube-controllers-757d4c4c4d-gc5kt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--757d4c4c4d--gc5kt-" Nov 5 15:55:08.431660 containerd[1641]: 2025-11-05 15:55:07.854 [INFO][4658] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b506a781e6c73089d7757bb7d2f2f041eb3aa92d7f415e78e06ce8e50f59c92b" Namespace="calico-system" 
Pod="calico-kube-controllers-757d4c4c4d-gc5kt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--757d4c4c4d--gc5kt-eth0" Nov 5 15:55:08.431660 containerd[1641]: 2025-11-05 15:55:07.908 [INFO][4682] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b506a781e6c73089d7757bb7d2f2f041eb3aa92d7f415e78e06ce8e50f59c92b" HandleID="k8s-pod-network.b506a781e6c73089d7757bb7d2f2f041eb3aa92d7f415e78e06ce8e50f59c92b" Workload="localhost-k8s-calico--kube--controllers--757d4c4c4d--gc5kt-eth0" Nov 5 15:55:08.431660 containerd[1641]: 2025-11-05 15:55:07.908 [INFO][4682] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b506a781e6c73089d7757bb7d2f2f041eb3aa92d7f415e78e06ce8e50f59c92b" HandleID="k8s-pod-network.b506a781e6c73089d7757bb7d2f2f041eb3aa92d7f415e78e06ce8e50f59c92b" Workload="localhost-k8s-calico--kube--controllers--757d4c4c4d--gc5kt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e620), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-757d4c4c4d-gc5kt", "timestamp":"2025-11-05 15:55:07.908378999 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:55:08.431660 containerd[1641]: 2025-11-05 15:55:07.908 [INFO][4682] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:55:08.431660 containerd[1641]: 2025-11-05 15:55:08.166 [INFO][4682] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 15:55:08.431660 containerd[1641]: 2025-11-05 15:55:08.166 [INFO][4682] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 15:55:08.431660 containerd[1641]: 2025-11-05 15:55:08.180 [INFO][4682] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b506a781e6c73089d7757bb7d2f2f041eb3aa92d7f415e78e06ce8e50f59c92b" host="localhost" Nov 5 15:55:08.431660 containerd[1641]: 2025-11-05 15:55:08.265 [INFO][4682] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 15:55:08.431660 containerd[1641]: 2025-11-05 15:55:08.321 [INFO][4682] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 15:55:08.431660 containerd[1641]: 2025-11-05 15:55:08.326 [INFO][4682] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 15:55:08.431660 containerd[1641]: 2025-11-05 15:55:08.328 [INFO][4682] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 15:55:08.431660 containerd[1641]: 2025-11-05 15:55:08.328 [INFO][4682] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b506a781e6c73089d7757bb7d2f2f041eb3aa92d7f415e78e06ce8e50f59c92b" host="localhost" Nov 5 15:55:08.431660 containerd[1641]: 2025-11-05 15:55:08.330 [INFO][4682] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b506a781e6c73089d7757bb7d2f2f041eb3aa92d7f415e78e06ce8e50f59c92b Nov 5 15:55:08.431660 containerd[1641]: 2025-11-05 15:55:08.368 [INFO][4682] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b506a781e6c73089d7757bb7d2f2f041eb3aa92d7f415e78e06ce8e50f59c92b" host="localhost" Nov 5 15:55:08.431660 containerd[1641]: 2025-11-05 15:55:08.381 [INFO][4682] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.133/26] 
block=192.168.88.128/26 handle="k8s-pod-network.b506a781e6c73089d7757bb7d2f2f041eb3aa92d7f415e78e06ce8e50f59c92b" host="localhost" Nov 5 15:55:08.431660 containerd[1641]: 2025-11-05 15:55:08.381 [INFO][4682] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.b506a781e6c73089d7757bb7d2f2f041eb3aa92d7f415e78e06ce8e50f59c92b" host="localhost" Nov 5 15:55:08.431660 containerd[1641]: 2025-11-05 15:55:08.381 [INFO][4682] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 15:55:08.431660 containerd[1641]: 2025-11-05 15:55:08.381 [INFO][4682] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="b506a781e6c73089d7757bb7d2f2f041eb3aa92d7f415e78e06ce8e50f59c92b" HandleID="k8s-pod-network.b506a781e6c73089d7757bb7d2f2f041eb3aa92d7f415e78e06ce8e50f59c92b" Workload="localhost-k8s-calico--kube--controllers--757d4c4c4d--gc5kt-eth0" Nov 5 15:55:08.432755 containerd[1641]: 2025-11-05 15:55:08.385 [INFO][4658] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b506a781e6c73089d7757bb7d2f2f041eb3aa92d7f415e78e06ce8e50f59c92b" Namespace="calico-system" Pod="calico-kube-controllers-757d4c4c4d-gc5kt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--757d4c4c4d--gc5kt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--757d4c4c4d--gc5kt-eth0", GenerateName:"calico-kube-controllers-757d4c4c4d-", Namespace:"calico-system", SelfLink:"", UID:"b05fd954-e904-4df9-a183-93526853dbb1", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 54, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"757d4c4c4d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-757d4c4c4d-gc5kt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali93efc188321", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:55:08.432755 containerd[1641]: 2025-11-05 15:55:08.386 [INFO][4658] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="b506a781e6c73089d7757bb7d2f2f041eb3aa92d7f415e78e06ce8e50f59c92b" Namespace="calico-system" Pod="calico-kube-controllers-757d4c4c4d-gc5kt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--757d4c4c4d--gc5kt-eth0" Nov 5 15:55:08.432755 containerd[1641]: 2025-11-05 15:55:08.386 [INFO][4658] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali93efc188321 ContainerID="b506a781e6c73089d7757bb7d2f2f041eb3aa92d7f415e78e06ce8e50f59c92b" Namespace="calico-system" Pod="calico-kube-controllers-757d4c4c4d-gc5kt" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--757d4c4c4d--gc5kt-eth0" Nov 5 15:55:08.432755 containerd[1641]: 2025-11-05 15:55:08.395 [INFO][4658] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b506a781e6c73089d7757bb7d2f2f041eb3aa92d7f415e78e06ce8e50f59c92b" Namespace="calico-system" Pod="calico-kube-controllers-757d4c4c4d-gc5kt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--757d4c4c4d--gc5kt-eth0" Nov 5 15:55:08.432755 containerd[1641]: 2025-11-05 15:55:08.397 [INFO][4658] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b506a781e6c73089d7757bb7d2f2f041eb3aa92d7f415e78e06ce8e50f59c92b" Namespace="calico-system" Pod="calico-kube-controllers-757d4c4c4d-gc5kt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--757d4c4c4d--gc5kt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--757d4c4c4d--gc5kt-eth0", GenerateName:"calico-kube-controllers-757d4c4c4d-", Namespace:"calico-system", SelfLink:"", UID:"b05fd954-e904-4df9-a183-93526853dbb1", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 54, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"757d4c4c4d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b506a781e6c73089d7757bb7d2f2f041eb3aa92d7f415e78e06ce8e50f59c92b", Pod:"calico-kube-controllers-757d4c4c4d-gc5kt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali93efc188321", MAC:"2a:02:c7:01:b0:58", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:55:08.432755 containerd[1641]: 2025-11-05 15:55:08.422 [INFO][4658] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b506a781e6c73089d7757bb7d2f2f041eb3aa92d7f415e78e06ce8e50f59c92b" Namespace="calico-system" Pod="calico-kube-controllers-757d4c4c4d-gc5kt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--757d4c4c4d--gc5kt-eth0" Nov 5 15:55:08.454261 systemd[1]: Started cri-containerd-8e773a51dd802a027a686223ca8c733e741889fd3c2a67dd8b56bdc73a085349.scope - libcontainer container 8e773a51dd802a027a686223ca8c733e741889fd3c2a67dd8b56bdc73a085349. 
Nov 5 15:55:08.477948 systemd-resolved[1322]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 15:55:08.478838 containerd[1641]: time="2025-11-05T15:55:08.478785512Z" level=info msg="connecting to shim b506a781e6c73089d7757bb7d2f2f041eb3aa92d7f415e78e06ce8e50f59c92b" address="unix:///run/containerd/s/5eee741c4655619c3920f6506ea41b61b3f920a0ff7afaaa61505a9f372db4fb" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:55:08.517209 systemd[1]: Started cri-containerd-b506a781e6c73089d7757bb7d2f2f041eb3aa92d7f415e78e06ce8e50f59c92b.scope - libcontainer container b506a781e6c73089d7757bb7d2f2f041eb3aa92d7f415e78e06ce8e50f59c92b. Nov 5 15:55:08.528727 containerd[1641]: time="2025-11-05T15:55:08.528658558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ch7wn,Uid:e9514e0b-1fb1-4b7f-898a-2d78ba283593,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e773a51dd802a027a686223ca8c733e741889fd3c2a67dd8b56bdc73a085349\"" Nov 5 15:55:08.530111 kubelet[2813]: E1105 15:55:08.530069 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:55:08.538201 containerd[1641]: time="2025-11-05T15:55:08.538128517Z" level=info msg="CreateContainer within sandbox \"8e773a51dd802a027a686223ca8c733e741889fd3c2a67dd8b56bdc73a085349\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 15:55:08.546048 systemd-resolved[1322]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 15:55:08.555197 kubelet[2813]: E1105 15:55:08.555086 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:55:08.555843 containerd[1641]: time="2025-11-05T15:55:08.555811011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-sdjz8,Uid:b599e586-f36d-4082-a717-ffeb6bad40b3,Namespace:kube-system,Attempt:0,}" Nov 5 15:55:08.566425 containerd[1641]: time="2025-11-05T15:55:08.566382719Z" level=info msg="Container 605045a502a22d4ade892274ebe88f0b9d6d420962553b64b71e33c0e36db825: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:55:08.579415 containerd[1641]: time="2025-11-05T15:55:08.579273992Z" level=info msg="CreateContainer within sandbox \"8e773a51dd802a027a686223ca8c733e741889fd3c2a67dd8b56bdc73a085349\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"605045a502a22d4ade892274ebe88f0b9d6d420962553b64b71e33c0e36db825\"" Nov 5 15:55:08.582329 containerd[1641]: time="2025-11-05T15:55:08.582289176Z" level=info msg="StartContainer for \"605045a502a22d4ade892274ebe88f0b9d6d420962553b64b71e33c0e36db825\"" Nov 5 15:55:08.586106 containerd[1641]: time="2025-11-05T15:55:08.586000350Z" level=info msg="connecting to shim 605045a502a22d4ade892274ebe88f0b9d6d420962553b64b71e33c0e36db825" address="unix:///run/containerd/s/6eaa07a0a60bbb50e4951bc0a48e49ab1f31f312f7d83f81261e3c9ca4b9b730" protocol=ttrpc version=3 Nov 5 15:55:08.590944 containerd[1641]: time="2025-11-05T15:55:08.590790409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-757d4c4c4d-gc5kt,Uid:b05fd954-e904-4df9-a183-93526853dbb1,Namespace:calico-system,Attempt:0,} returns sandbox id \"b506a781e6c73089d7757bb7d2f2f041eb3aa92d7f415e78e06ce8e50f59c92b\"" Nov 5 15:55:08.623029 systemd[1]: Started 
cri-containerd-605045a502a22d4ade892274ebe88f0b9d6d420962553b64b71e33c0e36db825.scope - libcontainer container 605045a502a22d4ade892274ebe88f0b9d6d420962553b64b71e33c0e36db825. Nov 5 15:55:08.700478 containerd[1641]: time="2025-11-05T15:55:08.700420125Z" level=info msg="StartContainer for \"605045a502a22d4ade892274ebe88f0b9d6d420962553b64b71e33c0e36db825\" returns successfully" Nov 5 15:55:08.726996 containerd[1641]: time="2025-11-05T15:55:08.726819590Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:55:08.729889 containerd[1641]: time="2025-11-05T15:55:08.729732752Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:55:08.730074 containerd[1641]: time="2025-11-05T15:55:08.729842660Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:55:08.730196 kubelet[2813]: E1105 15:55:08.730135 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:55:08.730662 kubelet[2813]: E1105 15:55:08.730206 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:55:08.730662 kubelet[2813]: E1105 15:55:08.730471 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-d76b985b9-kbchr_calico-apiserver(bc1f133a-26eb-43d7-9fdb-a3e47afd9653): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:55:08.730662 kubelet[2813]: E1105 15:55:08.730516 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d76b985b9-kbchr" podUID="bc1f133a-26eb-43d7-9fdb-a3e47afd9653" Nov 5 15:55:08.731123 containerd[1641]: time="2025-11-05T15:55:08.731086448Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 15:55:08.764026 systemd-networkd[1529]: calie0f3152ddf6: Link UP Nov 5 15:55:08.767533 systemd-networkd[1529]: calie0f3152ddf6: Gained carrier Nov 5 15:55:08.805623 systemd-networkd[1529]: cali15a2bcf1a70: Gained IPv6LL Nov 5 15:55:08.821285 containerd[1641]: 2025-11-05 15:55:08.611 [INFO][4854] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint 
projectcalico.org/v3} {localhost-k8s-coredns--66bc5c9577--sdjz8-eth0 coredns-66bc5c9577- kube-system b599e586-f36d-4082-a717-ffeb6bad40b3 933 0 2025-11-05 15:53:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-66bc5c9577-sdjz8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie0f3152ddf6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="fde785543e64efd3ae316333ab6ef1fbd5e5dd75052830e9c3fc6c786365fee4" Namespace="kube-system" Pod="coredns-66bc5c9577-sdjz8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--sdjz8-" Nov 5 15:55:08.821285 containerd[1641]: 2025-11-05 15:55:08.611 [INFO][4854] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fde785543e64efd3ae316333ab6ef1fbd5e5dd75052830e9c3fc6c786365fee4" Namespace="kube-system" Pod="coredns-66bc5c9577-sdjz8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--sdjz8-eth0" Nov 5 15:55:08.821285 containerd[1641]: 2025-11-05 15:55:08.661 [INFO][4887] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fde785543e64efd3ae316333ab6ef1fbd5e5dd75052830e9c3fc6c786365fee4" HandleID="k8s-pod-network.fde785543e64efd3ae316333ab6ef1fbd5e5dd75052830e9c3fc6c786365fee4" Workload="localhost-k8s-coredns--66bc5c9577--sdjz8-eth0" Nov 5 15:55:08.821285 containerd[1641]: 2025-11-05 15:55:08.661 [INFO][4887] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fde785543e64efd3ae316333ab6ef1fbd5e5dd75052830e9c3fc6c786365fee4" HandleID="k8s-pod-network.fde785543e64efd3ae316333ab6ef1fbd5e5dd75052830e9c3fc6c786365fee4" Workload="localhost-k8s-coredns--66bc5c9577--sdjz8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000528090), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-66bc5c9577-sdjz8", "timestamp":"2025-11-05 15:55:08.661345726 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:55:08.821285 containerd[1641]: 2025-11-05 15:55:08.661 [INFO][4887] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:55:08.821285 containerd[1641]: 2025-11-05 15:55:08.661 [INFO][4887] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
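In the WorkloadEndpoint struct dumps that follow, the coredns ports appear hex-encoded: Port:0x35 is 53 (dns and dns-tcp), 0x23c1 is 9153 (metrics), 0x1f90 is 8080 (liveness-probe) and 0x1ff5 is 8181 (readiness-probe), the same named ports the plugin.go 340 entry above lists in decimal. The stored endpoint can be read back in a friendlier form with calicoctl, assuming it is available on the node (namespace taken from the log):

    $ calicoctl get workloadendpoints -n kube-system -o yaml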
Nov 5 15:55:08.821285 containerd[1641]: 2025-11-05 15:55:08.661 [INFO][4887] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 15:55:08.821285 containerd[1641]: 2025-11-05 15:55:08.673 [INFO][4887] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fde785543e64efd3ae316333ab6ef1fbd5e5dd75052830e9c3fc6c786365fee4" host="localhost" Nov 5 15:55:08.821285 containerd[1641]: 2025-11-05 15:55:08.694 [INFO][4887] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 15:55:08.821285 containerd[1641]: 2025-11-05 15:55:08.702 [INFO][4887] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 15:55:08.821285 containerd[1641]: 2025-11-05 15:55:08.708 [INFO][4887] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 15:55:08.821285 containerd[1641]: 2025-11-05 15:55:08.712 [INFO][4887] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 15:55:08.821285 containerd[1641]: 2025-11-05 15:55:08.713 [INFO][4887] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fde785543e64efd3ae316333ab6ef1fbd5e5dd75052830e9c3fc6c786365fee4" host="localhost" Nov 5 15:55:08.821285 containerd[1641]: 2025-11-05 15:55:08.717 [INFO][4887] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fde785543e64efd3ae316333ab6ef1fbd5e5dd75052830e9c3fc6c786365fee4 Nov 5 15:55:08.821285 containerd[1641]: 2025-11-05 15:55:08.726 [INFO][4887] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fde785543e64efd3ae316333ab6ef1fbd5e5dd75052830e9c3fc6c786365fee4" host="localhost" Nov 5 15:55:08.821285 containerd[1641]: 2025-11-05 15:55:08.753 [INFO][4887] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.fde785543e64efd3ae316333ab6ef1fbd5e5dd75052830e9c3fc6c786365fee4" host="localhost" Nov 5 15:55:08.821285 containerd[1641]: 2025-11-05 15:55:08.753 [INFO][4887] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.fde785543e64efd3ae316333ab6ef1fbd5e5dd75052830e9c3fc6c786365fee4" host="localhost" Nov 5 15:55:08.821285 containerd[1641]: 2025-11-05 15:55:08.753 [INFO][4887] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
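The allocation above is Calico's block-based IPAM at work: the node ("localhost") already holds an affinity for the block 192.168.88.128/26, so the handler takes the host-wide lock, confirms the affinity, loads the block, claims the next free address (192.168.88.134 here, following .133 assigned earlier in this section), writes the block back to claim the IP, and releases the lock. Block utilization and per-IP ownership can be inspected with calicoctl, as a sketch:

    $ calicoctl ipam show --show-blocks         # block affinities and utilization per host
    $ calicoctl ipam show --ip=192.168.88.134   # which handle (pod) owns this address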
Nov 5 15:55:08.821285 containerd[1641]: 2025-11-05 15:55:08.753 [INFO][4887] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="fde785543e64efd3ae316333ab6ef1fbd5e5dd75052830e9c3fc6c786365fee4" HandleID="k8s-pod-network.fde785543e64efd3ae316333ab6ef1fbd5e5dd75052830e9c3fc6c786365fee4" Workload="localhost-k8s-coredns--66bc5c9577--sdjz8-eth0" Nov 5 15:55:08.822624 containerd[1641]: 2025-11-05 15:55:08.761 [INFO][4854] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fde785543e64efd3ae316333ab6ef1fbd5e5dd75052830e9c3fc6c786365fee4" Namespace="kube-system" Pod="coredns-66bc5c9577-sdjz8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--sdjz8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--sdjz8-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b599e586-f36d-4082-a717-ffeb6bad40b3", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 53, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-66bc5c9577-sdjz8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie0f3152ddf6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:55:08.822624 containerd[1641]: 2025-11-05 15:55:08.761 [INFO][4854] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="fde785543e64efd3ae316333ab6ef1fbd5e5dd75052830e9c3fc6c786365fee4" Namespace="kube-system" Pod="coredns-66bc5c9577-sdjz8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--sdjz8-eth0" Nov 5 15:55:08.822624 containerd[1641]: 2025-11-05 15:55:08.761 [INFO][4854] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie0f3152ddf6 ContainerID="fde785543e64efd3ae316333ab6ef1fbd5e5dd75052830e9c3fc6c786365fee4" Namespace="kube-system" Pod="coredns-66bc5c9577-sdjz8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--sdjz8-eth0" Nov 5 15:55:08.822624 containerd[1641]: 2025-11-05 15:55:08.766 
[INFO][4854] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fde785543e64efd3ae316333ab6ef1fbd5e5dd75052830e9c3fc6c786365fee4" Namespace="kube-system" Pod="coredns-66bc5c9577-sdjz8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--sdjz8-eth0" Nov 5 15:55:08.822624 containerd[1641]: 2025-11-05 15:55:08.766 [INFO][4854] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fde785543e64efd3ae316333ab6ef1fbd5e5dd75052830e9c3fc6c786365fee4" Namespace="kube-system" Pod="coredns-66bc5c9577-sdjz8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--sdjz8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--66bc5c9577--sdjz8-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"b599e586-f36d-4082-a717-ffeb6bad40b3", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 53, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fde785543e64efd3ae316333ab6ef1fbd5e5dd75052830e9c3fc6c786365fee4", Pod:"coredns-66bc5c9577-sdjz8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie0f3152ddf6", MAC:"52:a5:d9:f5:8c:40", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:55:08.822624 containerd[1641]: 2025-11-05 15:55:08.809 [INFO][4854] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fde785543e64efd3ae316333ab6ef1fbd5e5dd75052830e9c3fc6c786365fee4" Namespace="kube-system" Pod="coredns-66bc5c9577-sdjz8" WorkloadEndpoint="localhost-k8s-coredns--66bc5c9577--sdjz8-eth0" Nov 5 15:55:08.868172 containerd[1641]: time="2025-11-05T15:55:08.868111607Z" level=info msg="connecting to shim fde785543e64efd3ae316333ab6ef1fbd5e5dd75052830e9c3fc6c786365fee4" address="unix:///run/containerd/s/16fd61fd90ec33b94b06b2812a97f179e96ee0a0b8ef9f461578b0ace0185aef" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:55:08.911216 systemd[1]: Started 
cri-containerd-fde785543e64efd3ae316333ab6ef1fbd5e5dd75052830e9c3fc6c786365fee4.scope - libcontainer container fde785543e64efd3ae316333ab6ef1fbd5e5dd75052830e9c3fc6c786365fee4. Nov 5 15:55:08.928166 systemd-resolved[1322]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 15:55:09.070372 containerd[1641]: time="2025-11-05T15:55:09.070170499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-sdjz8,Uid:b599e586-f36d-4082-a717-ffeb6bad40b3,Namespace:kube-system,Attempt:0,} returns sandbox id \"fde785543e64efd3ae316333ab6ef1fbd5e5dd75052830e9c3fc6c786365fee4\"" Nov 5 15:55:09.071136 kubelet[2813]: E1105 15:55:09.071102 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:55:09.098580 containerd[1641]: time="2025-11-05T15:55:09.098472679Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:55:09.226546 containerd[1641]: time="2025-11-05T15:55:09.226347283Z" level=info msg="CreateContainer within sandbox \"fde785543e64efd3ae316333ab6ef1fbd5e5dd75052830e9c3fc6c786365fee4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 15:55:09.227546 containerd[1641]: time="2025-11-05T15:55:09.226900933Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 15:55:09.227546 containerd[1641]: time="2025-11-05T15:55:09.226974371Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 15:55:09.227890 kubelet[2813]: E1105 15:55:09.227198 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:55:09.227890 kubelet[2813]: E1105 15:55:09.227252 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:55:09.227890 kubelet[2813]: E1105 15:55:09.227358 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-757d4c4c4d-gc5kt_calico-system(b05fd954-e904-4df9-a183-93526853dbb1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 15:55:09.227890 kubelet[2813]: E1105 15:55:09.227407 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: 
code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-757d4c4c4d-gc5kt" podUID="b05fd954-e904-4df9-a183-93526853dbb1" Nov 5 15:55:09.270962 containerd[1641]: time="2025-11-05T15:55:09.270707292Z" level=info msg="Container 5f5940e0a4fcf479d40a264eadf6f01d59ba561aa39c7efe8e6c00590d2ec7df: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:55:09.291099 containerd[1641]: time="2025-11-05T15:55:09.291028160Z" level=info msg="CreateContainer within sandbox \"fde785543e64efd3ae316333ab6ef1fbd5e5dd75052830e9c3fc6c786365fee4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5f5940e0a4fcf479d40a264eadf6f01d59ba561aa39c7efe8e6c00590d2ec7df\"" Nov 5 15:55:09.292059 containerd[1641]: time="2025-11-05T15:55:09.292002626Z" level=info msg="StartContainer for \"5f5940e0a4fcf479d40a264eadf6f01d59ba561aa39c7efe8e6c00590d2ec7df\"" Nov 5 15:55:09.293748 containerd[1641]: time="2025-11-05T15:55:09.293663914Z" level=info msg="connecting to shim 5f5940e0a4fcf479d40a264eadf6f01d59ba561aa39c7efe8e6c00590d2ec7df" address="unix:///run/containerd/s/16fd61fd90ec33b94b06b2812a97f179e96ee0a0b8ef9f461578b0ace0185aef" protocol=ttrpc version=3 Nov 5 15:55:09.321144 systemd[1]: Started cri-containerd-5f5940e0a4fcf479d40a264eadf6f01d59ba561aa39c7efe8e6c00590d2ec7df.scope - libcontainer container 5f5940e0a4fcf479d40a264eadf6f01d59ba561aa39c7efe8e6c00590d2ec7df. Nov 5 15:55:09.386171 containerd[1641]: time="2025-11-05T15:55:09.386124511Z" level=info msg="StartContainer for \"5f5940e0a4fcf479d40a264eadf6f01d59ba561aa39c7efe8e6c00590d2ec7df\" returns successfully" Nov 5 15:55:09.395156 kubelet[2813]: E1105 15:55:09.395047 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:55:09.400262 kubelet[2813]: E1105 15:55:09.400158 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-757d4c4c4d-gc5kt" podUID="b05fd954-e904-4df9-a183-93526853dbb1" Nov 5 15:55:09.401905 kubelet[2813]: E1105 15:55:09.401801 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:55:09.404197 kubelet[2813]: E1105 15:55:09.403473 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-d76b985b9-kbchr" podUID="bc1f133a-26eb-43d7-9fdb-a3e47afd9653" Nov 5 15:55:09.405239 kubelet[2813]: E1105 15:55:09.405185 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gf82q" podUID="5cbe1702-972a-4f84-9d2f-51b96b54edda" Nov 5 15:55:09.472046 kubelet[2813]: I1105 15:55:09.471617 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-sdjz8" podStartSLOduration=82.471592607 podStartE2EDuration="1m22.471592607s" podCreationTimestamp="2025-11-05 15:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:55:09.434298598 +0000 UTC m=+87.027174337" watchObservedRunningTime="2025-11-05 15:55:09.471592607 +0000 UTC m=+87.064468336" Nov 5 15:55:09.555955 containerd[1641]: time="2025-11-05T15:55:09.555427819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d76b985b9-z9rht,Uid:e369c643-3d7c-424a-939d-fd5462f1f671,Namespace:calico-apiserver,Attempt:0,}" Nov 5 15:55:09.559711 kubelet[2813]: I1105 15:55:09.558518 2813 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-ch7wn" podStartSLOduration=82.558493969 podStartE2EDuration="1m22.558493969s" podCreationTimestamp="2025-11-05 15:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:55:09.513362237 +0000 UTC m=+87.106237976" watchObservedRunningTime="2025-11-05 15:55:09.558493969 +0000 UTC m=+87.151369708" Nov 5 15:55:09.560966 containerd[1641]: time="2025-11-05T15:55:09.560886733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-mswwg,Uid:48c2a4a5-482d-4600-8d80-4c89933cceaa,Namespace:calico-system,Attempt:0,}" Nov 5 15:55:09.701112 systemd-networkd[1529]: cali93efc188321: Gained IPv6LL Nov 5 15:55:09.895063 systemd-networkd[1529]: calide0c2f0be05: Gained IPv6LL Nov 5 15:55:09.895282 systemd-networkd[1529]: cali1a5510ca266: Gained IPv6LL Nov 5 15:55:09.895502 systemd-networkd[1529]: calia16a79f4fe6: Link UP Nov 5 15:55:09.898056 systemd-networkd[1529]: calia16a79f4fe6: Gained carrier Nov 5 15:55:09.976398 containerd[1641]: 2025-11-05 15:55:09.644 [INFO][5007] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--d76b985b9--z9rht-eth0 calico-apiserver-d76b985b9- calico-apiserver e369c643-3d7c-424a-939d-fd5462f1f671 930 0 2025-11-05 15:54:03 +0000 UTC map[apiserver:true 
app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d76b985b9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-d76b985b9-z9rht eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia16a79f4fe6 [] [] }} ContainerID="c8aed4913316e0b5d0c02411135f02b94cbf7e9545dc1d7b9f5c1e8f3a961ca8" Namespace="calico-apiserver" Pod="calico-apiserver-d76b985b9-z9rht" WorkloadEndpoint="localhost-k8s-calico--apiserver--d76b985b9--z9rht-" Nov 5 15:55:09.976398 containerd[1641]: 2025-11-05 15:55:09.644 [INFO][5007] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c8aed4913316e0b5d0c02411135f02b94cbf7e9545dc1d7b9f5c1e8f3a961ca8" Namespace="calico-apiserver" Pod="calico-apiserver-d76b985b9-z9rht" WorkloadEndpoint="localhost-k8s-calico--apiserver--d76b985b9--z9rht-eth0" Nov 5 15:55:09.976398 containerd[1641]: 2025-11-05 15:55:09.689 [INFO][5037] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c8aed4913316e0b5d0c02411135f02b94cbf7e9545dc1d7b9f5c1e8f3a961ca8" HandleID="k8s-pod-network.c8aed4913316e0b5d0c02411135f02b94cbf7e9545dc1d7b9f5c1e8f3a961ca8" Workload="localhost-k8s-calico--apiserver--d76b985b9--z9rht-eth0" Nov 5 15:55:09.976398 containerd[1641]: 2025-11-05 15:55:09.689 [INFO][5037] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c8aed4913316e0b5d0c02411135f02b94cbf7e9545dc1d7b9f5c1e8f3a961ca8" HandleID="k8s-pod-network.c8aed4913316e0b5d0c02411135f02b94cbf7e9545dc1d7b9f5c1e8f3a961ca8" Workload="localhost-k8s-calico--apiserver--d76b985b9--z9rht-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e820), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-d76b985b9-z9rht", "timestamp":"2025-11-05 15:55:09.689556484 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:55:09.976398 containerd[1641]: 2025-11-05 15:55:09.689 [INFO][5037] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:55:09.976398 containerd[1641]: 2025-11-05 15:55:09.690 [INFO][5037] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 15:55:09.976398 containerd[1641]: 2025-11-05 15:55:09.690 [INFO][5037] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 15:55:09.976398 containerd[1641]: 2025-11-05 15:55:09.770 [INFO][5037] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c8aed4913316e0b5d0c02411135f02b94cbf7e9545dc1d7b9f5c1e8f3a961ca8" host="localhost" Nov 5 15:55:09.976398 containerd[1641]: 2025-11-05 15:55:09.779 [INFO][5037] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 15:55:09.976398 containerd[1641]: 2025-11-05 15:55:09.785 [INFO][5037] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 15:55:09.976398 containerd[1641]: 2025-11-05 15:55:09.787 [INFO][5037] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 15:55:09.976398 containerd[1641]: 2025-11-05 15:55:09.792 [INFO][5037] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 15:55:09.976398 containerd[1641]: 2025-11-05 15:55:09.792 [INFO][5037] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c8aed4913316e0b5d0c02411135f02b94cbf7e9545dc1d7b9f5c1e8f3a961ca8" host="localhost" Nov 5 15:55:09.976398 containerd[1641]: 2025-11-05 15:55:09.794 [INFO][5037] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c8aed4913316e0b5d0c02411135f02b94cbf7e9545dc1d7b9f5c1e8f3a961ca8 Nov 5 15:55:09.976398 containerd[1641]: 2025-11-05 15:55:09.851 [INFO][5037] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c8aed4913316e0b5d0c02411135f02b94cbf7e9545dc1d7b9f5c1e8f3a961ca8" host="localhost" Nov 5 15:55:09.976398 containerd[1641]: 2025-11-05 15:55:09.887 [INFO][5037] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.c8aed4913316e0b5d0c02411135f02b94cbf7e9545dc1d7b9f5c1e8f3a961ca8" host="localhost" Nov 5 15:55:09.976398 containerd[1641]: 2025-11-05 15:55:09.887 [INFO][5037] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.c8aed4913316e0b5d0c02411135f02b94cbf7e9545dc1d7b9f5c1e8f3a961ca8" host="localhost" Nov 5 15:55:09.976398 containerd[1641]: 2025-11-05 15:55:09.887 [INFO][5037] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
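Note the split visible throughout this stretch: RunPodSandbox keeps succeeding (pause container started, veth wired up, IP claimed) even while PullImage for the workload images keeps failing with 404s, so each pod ends up with a ready sandbox and no running application container. That state is visible directly at the CRI level from the node:

    $ crictl pods     # pod sandboxes, shown Ready
    $ crictl ps -a    # application containers absent or stuck waiting on their images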
Nov 5 15:55:09.976398 containerd[1641]: 2025-11-05 15:55:09.887 [INFO][5037] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="c8aed4913316e0b5d0c02411135f02b94cbf7e9545dc1d7b9f5c1e8f3a961ca8" HandleID="k8s-pod-network.c8aed4913316e0b5d0c02411135f02b94cbf7e9545dc1d7b9f5c1e8f3a961ca8" Workload="localhost-k8s-calico--apiserver--d76b985b9--z9rht-eth0" Nov 5 15:55:09.978079 containerd[1641]: 2025-11-05 15:55:09.891 [INFO][5007] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c8aed4913316e0b5d0c02411135f02b94cbf7e9545dc1d7b9f5c1e8f3a961ca8" Namespace="calico-apiserver" Pod="calico-apiserver-d76b985b9-z9rht" WorkloadEndpoint="localhost-k8s-calico--apiserver--d76b985b9--z9rht-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d76b985b9--z9rht-eth0", GenerateName:"calico-apiserver-d76b985b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"e369c643-3d7c-424a-939d-fd5462f1f671", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 54, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d76b985b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-d76b985b9-z9rht", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia16a79f4fe6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:55:09.978079 containerd[1641]: 2025-11-05 15:55:09.891 [INFO][5007] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="c8aed4913316e0b5d0c02411135f02b94cbf7e9545dc1d7b9f5c1e8f3a961ca8" Namespace="calico-apiserver" Pod="calico-apiserver-d76b985b9-z9rht" WorkloadEndpoint="localhost-k8s-calico--apiserver--d76b985b9--z9rht-eth0" Nov 5 15:55:09.978079 containerd[1641]: 2025-11-05 15:55:09.891 [INFO][5007] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia16a79f4fe6 ContainerID="c8aed4913316e0b5d0c02411135f02b94cbf7e9545dc1d7b9f5c1e8f3a961ca8" Namespace="calico-apiserver" Pod="calico-apiserver-d76b985b9-z9rht" WorkloadEndpoint="localhost-k8s-calico--apiserver--d76b985b9--z9rht-eth0" Nov 5 15:55:09.978079 containerd[1641]: 2025-11-05 15:55:09.896 [INFO][5007] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c8aed4913316e0b5d0c02411135f02b94cbf7e9545dc1d7b9f5c1e8f3a961ca8" Namespace="calico-apiserver" Pod="calico-apiserver-d76b985b9-z9rht" WorkloadEndpoint="localhost-k8s-calico--apiserver--d76b985b9--z9rht-eth0" Nov 5 15:55:09.978079 containerd[1641]: 2025-11-05 15:55:09.898 [INFO][5007] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="c8aed4913316e0b5d0c02411135f02b94cbf7e9545dc1d7b9f5c1e8f3a961ca8" Namespace="calico-apiserver" Pod="calico-apiserver-d76b985b9-z9rht" WorkloadEndpoint="localhost-k8s-calico--apiserver--d76b985b9--z9rht-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d76b985b9--z9rht-eth0", GenerateName:"calico-apiserver-d76b985b9-", Namespace:"calico-apiserver", SelfLink:"", UID:"e369c643-3d7c-424a-939d-fd5462f1f671", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 54, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d76b985b9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c8aed4913316e0b5d0c02411135f02b94cbf7e9545dc1d7b9f5c1e8f3a961ca8", Pod:"calico-apiserver-d76b985b9-z9rht", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia16a79f4fe6", MAC:"f2:77:09:30:57:fc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:55:09.978079 containerd[1641]: 2025-11-05 15:55:09.971 [INFO][5007] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c8aed4913316e0b5d0c02411135f02b94cbf7e9545dc1d7b9f5c1e8f3a961ca8" Namespace="calico-apiserver" Pod="calico-apiserver-d76b985b9-z9rht" WorkloadEndpoint="localhost-k8s-calico--apiserver--d76b985b9--z9rht-eth0" Nov 5 15:55:10.127726 systemd-networkd[1529]: cali30f2be9c25c: Link UP Nov 5 15:55:10.128547 systemd-networkd[1529]: cali30f2be9c25c: Gained carrier Nov 5 15:55:10.355945 containerd[1641]: 2025-11-05 15:55:09.647 [INFO][5015] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7c778bb748--mswwg-eth0 goldmane-7c778bb748- calico-system 48c2a4a5-482d-4600-8d80-4c89933cceaa 931 0 2025-11-05 15:54:09 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7c778bb748-mswwg eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali30f2be9c25c [] [] }} ContainerID="02e442ef0b992a537d0efe910d9a188a6fcc8f2a043cdafc758c9398394a1197" Namespace="calico-system" Pod="goldmane-7c778bb748-mswwg" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--mswwg-" Nov 5 15:55:10.355945 containerd[1641]: 2025-11-05 15:55:09.647 [INFO][5015] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="02e442ef0b992a537d0efe910d9a188a6fcc8f2a043cdafc758c9398394a1197" Namespace="calico-system" Pod="goldmane-7c778bb748-mswwg" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--mswwg-eth0" Nov 5 15:55:10.355945 
containerd[1641]: 2025-11-05 15:55:09.704 [INFO][5035] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="02e442ef0b992a537d0efe910d9a188a6fcc8f2a043cdafc758c9398394a1197" HandleID="k8s-pod-network.02e442ef0b992a537d0efe910d9a188a6fcc8f2a043cdafc758c9398394a1197" Workload="localhost-k8s-goldmane--7c778bb748--mswwg-eth0" Nov 5 15:55:10.355945 containerd[1641]: 2025-11-05 15:55:09.704 [INFO][5035] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="02e442ef0b992a537d0efe910d9a188a6fcc8f2a043cdafc758c9398394a1197" HandleID="k8s-pod-network.02e442ef0b992a537d0efe910d9a188a6fcc8f2a043cdafc758c9398394a1197" Workload="localhost-k8s-goldmane--7c778bb748--mswwg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fcd0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7c778bb748-mswwg", "timestamp":"2025-11-05 15:55:09.704285655 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:55:10.355945 containerd[1641]: 2025-11-05 15:55:09.704 [INFO][5035] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:55:10.355945 containerd[1641]: 2025-11-05 15:55:09.887 [INFO][5035] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 15:55:10.355945 containerd[1641]: 2025-11-05 15:55:09.887 [INFO][5035] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Nov 5 15:55:10.355945 containerd[1641]: 2025-11-05 15:55:09.899 [INFO][5035] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.02e442ef0b992a537d0efe910d9a188a6fcc8f2a043cdafc758c9398394a1197" host="localhost" Nov 5 15:55:10.355945 containerd[1641]: 2025-11-05 15:55:09.908 [INFO][5035] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Nov 5 15:55:10.355945 containerd[1641]: 2025-11-05 15:55:09.977 [INFO][5035] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Nov 5 15:55:10.355945 containerd[1641]: 2025-11-05 15:55:09.981 [INFO][5035] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Nov 5 15:55:10.355945 containerd[1641]: 2025-11-05 15:55:09.984 [INFO][5035] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Nov 5 15:55:10.355945 containerd[1641]: 2025-11-05 15:55:09.984 [INFO][5035] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.02e442ef0b992a537d0efe910d9a188a6fcc8f2a043cdafc758c9398394a1197" host="localhost" Nov 5 15:55:10.355945 containerd[1641]: 2025-11-05 15:55:10.047 [INFO][5035] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.02e442ef0b992a537d0efe910d9a188a6fcc8f2a043cdafc758c9398394a1197 Nov 5 15:55:10.355945 containerd[1641]: 2025-11-05 15:55:10.060 [INFO][5035] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.02e442ef0b992a537d0efe910d9a188a6fcc8f2a043cdafc758c9398394a1197" host="localhost" Nov 5 15:55:10.355945 containerd[1641]: 2025-11-05 15:55:10.119 [INFO][5035] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.02e442ef0b992a537d0efe910d9a188a6fcc8f2a043cdafc758c9398394a1197" host="localhost" Nov 5 15:55:10.355945 containerd[1641]: 2025-11-05 15:55:10.119 
[INFO][5035] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.02e442ef0b992a537d0efe910d9a188a6fcc8f2a043cdafc758c9398394a1197" host="localhost" Nov 5 15:55:10.355945 containerd[1641]: 2025-11-05 15:55:10.119 [INFO][5035] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 15:55:10.355945 containerd[1641]: 2025-11-05 15:55:10.119 [INFO][5035] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="02e442ef0b992a537d0efe910d9a188a6fcc8f2a043cdafc758c9398394a1197" HandleID="k8s-pod-network.02e442ef0b992a537d0efe910d9a188a6fcc8f2a043cdafc758c9398394a1197" Workload="localhost-k8s-goldmane--7c778bb748--mswwg-eth0" Nov 5 15:55:10.356769 containerd[1641]: 2025-11-05 15:55:10.124 [INFO][5015] cni-plugin/k8s.go 418: Populated endpoint ContainerID="02e442ef0b992a537d0efe910d9a188a6fcc8f2a043cdafc758c9398394a1197" Namespace="calico-system" Pod="goldmane-7c778bb748-mswwg" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--mswwg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--mswwg-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"48c2a4a5-482d-4600-8d80-4c89933cceaa", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 54, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7c778bb748-mswwg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali30f2be9c25c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:55:10.356769 containerd[1641]: 2025-11-05 15:55:10.124 [INFO][5015] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="02e442ef0b992a537d0efe910d9a188a6fcc8f2a043cdafc758c9398394a1197" Namespace="calico-system" Pod="goldmane-7c778bb748-mswwg" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--mswwg-eth0" Nov 5 15:55:10.356769 containerd[1641]: 2025-11-05 15:55:10.124 [INFO][5015] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali30f2be9c25c ContainerID="02e442ef0b992a537d0efe910d9a188a6fcc8f2a043cdafc758c9398394a1197" Namespace="calico-system" Pod="goldmane-7c778bb748-mswwg" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--mswwg-eth0" Nov 5 15:55:10.356769 containerd[1641]: 2025-11-05 15:55:10.129 [INFO][5015] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="02e442ef0b992a537d0efe910d9a188a6fcc8f2a043cdafc758c9398394a1197" Namespace="calico-system" Pod="goldmane-7c778bb748-mswwg" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--mswwg-eth0" Nov 5 15:55:10.356769 containerd[1641]: 2025-11-05 15:55:10.130 
[INFO][5015] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="02e442ef0b992a537d0efe910d9a188a6fcc8f2a043cdafc758c9398394a1197" Namespace="calico-system" Pod="goldmane-7c778bb748-mswwg" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--mswwg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7c778bb748--mswwg-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"48c2a4a5-482d-4600-8d80-4c89933cceaa", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 54, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"02e442ef0b992a537d0efe910d9a188a6fcc8f2a043cdafc758c9398394a1197", Pod:"goldmane-7c778bb748-mswwg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali30f2be9c25c", MAC:"66:17:1b:7c:a8:7a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:55:10.356769 containerd[1641]: 2025-11-05 15:55:10.351 [INFO][5015] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="02e442ef0b992a537d0efe910d9a188a6fcc8f2a043cdafc758c9398394a1197" Namespace="calico-system" Pod="goldmane-7c778bb748-mswwg" WorkloadEndpoint="localhost-k8s-goldmane--7c778bb748--mswwg-eth0" Nov 5 15:55:10.404082 kubelet[2813]: E1105 15:55:10.403717 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:55:10.404082 kubelet[2813]: E1105 15:55:10.404073 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:55:10.406207 kubelet[2813]: E1105 15:55:10.406155 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-757d4c4c4d-gc5kt" podUID="b05fd954-e904-4df9-a183-93526853dbb1" Nov 5 15:55:10.495951 containerd[1641]: time="2025-11-05T15:55:10.495540395Z" level=info msg="connecting to shim c8aed4913316e0b5d0c02411135f02b94cbf7e9545dc1d7b9f5c1e8f3a961ca8" address="unix:///run/containerd/s/3c4153c87503efba05172a535848b1a58a3f23d5f2e4d7dd23fdaf7a1d99e6a9" 
namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:55:10.522082 systemd[1]: Started cri-containerd-c8aed4913316e0b5d0c02411135f02b94cbf7e9545dc1d7b9f5c1e8f3a961ca8.scope - libcontainer container c8aed4913316e0b5d0c02411135f02b94cbf7e9545dc1d7b9f5c1e8f3a961ca8. Nov 5 15:55:10.537821 systemd-resolved[1322]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 15:55:10.586010 containerd[1641]: time="2025-11-05T15:55:10.585896910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d76b985b9-z9rht,Uid:e369c643-3d7c-424a-939d-fd5462f1f671,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c8aed4913316e0b5d0c02411135f02b94cbf7e9545dc1d7b9f5c1e8f3a961ca8\"" Nov 5 15:55:10.587979 containerd[1641]: time="2025-11-05T15:55:10.587940220Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:55:10.639073 containerd[1641]: time="2025-11-05T15:55:10.638885097Z" level=info msg="connecting to shim 02e442ef0b992a537d0efe910d9a188a6fcc8f2a043cdafc758c9398394a1197" address="unix:///run/containerd/s/60c2b2d3e2b4976e3576e2ea658ddc8bb3a8d22c4fd8512e9d8783c07162ba44" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:55:10.662588 systemd-networkd[1529]: calie0f3152ddf6: Gained IPv6LL Nov 5 15:55:10.687255 systemd[1]: Started cri-containerd-02e442ef0b992a537d0efe910d9a188a6fcc8f2a043cdafc758c9398394a1197.scope - libcontainer container 02e442ef0b992a537d0efe910d9a188a6fcc8f2a043cdafc758c9398394a1197. Nov 5 15:55:10.709334 systemd-resolved[1322]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 15:55:10.750436 containerd[1641]: time="2025-11-05T15:55:10.750390819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-mswwg,Uid:48c2a4a5-482d-4600-8d80-4c89933cceaa,Namespace:calico-system,Attempt:0,} returns sandbox id \"02e442ef0b992a537d0efe910d9a188a6fcc8f2a043cdafc758c9398394a1197\"" Nov 5 15:55:11.022903 containerd[1641]: time="2025-11-05T15:55:11.022033009Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:55:11.029835 containerd[1641]: time="2025-11-05T15:55:11.029670435Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:55:11.029835 containerd[1641]: time="2025-11-05T15:55:11.029787767Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:55:11.032578 kubelet[2813]: E1105 15:55:11.030289 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:55:11.032578 kubelet[2813]: E1105 15:55:11.031726 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 
15:55:11.033288 containerd[1641]: time="2025-11-05T15:55:11.033202766Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 15:55:11.035643 kubelet[2813]: E1105 15:55:11.034085 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-d76b985b9-z9rht_calico-apiserver(e369c643-3d7c-424a-939d-fd5462f1f671): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:55:11.035643 kubelet[2813]: E1105 15:55:11.034139 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d76b985b9-z9rht" podUID="e369c643-3d7c-424a-939d-fd5462f1f671" Nov 5 15:55:11.409291 kubelet[2813]: E1105 15:55:11.409144 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:55:11.409291 kubelet[2813]: E1105 15:55:11.409231 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d76b985b9-z9rht" podUID="e369c643-3d7c-424a-939d-fd5462f1f671" Nov 5 15:55:11.410026 kubelet[2813]: E1105 15:55:11.409998 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:55:11.456717 containerd[1641]: time="2025-11-05T15:55:11.456638671Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:55:11.495065 containerd[1641]: time="2025-11-05T15:55:11.494874906Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 15:55:11.495065 containerd[1641]: time="2025-11-05T15:55:11.494951712Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 15:55:11.495401 kubelet[2813]: E1105 15:55:11.495306 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:55:11.495401 kubelet[2813]: E1105 15:55:11.495365 2813 kuberuntime_image.go:43] 
"Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:55:11.495654 kubelet[2813]: E1105 15:55:11.495478 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-mswwg_calico-system(48c2a4a5-482d-4600-8d80-4c89933cceaa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 15:55:11.495654 kubelet[2813]: E1105 15:55:11.495530 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-mswwg" podUID="48c2a4a5-482d-4600-8d80-4c89933cceaa" Nov 5 15:55:11.814224 systemd-networkd[1529]: calia16a79f4fe6: Gained IPv6LL Nov 5 15:55:11.889399 systemd[1]: Started sshd@10-10.0.0.94:22-10.0.0.1:60704.service - OpenSSH per-connection server daemon (10.0.0.1:60704). Nov 5 15:55:11.962884 sshd[5181]: Accepted publickey for core from 10.0.0.1 port 60704 ssh2: RSA SHA256:jxfBzj8t4gNsP6XgB3HCYMs94mi46GFjdNA2wywm1q8 Nov 5 15:55:11.965104 sshd-session[5181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:55:11.970391 systemd-logind[1620]: New session 11 of user core. Nov 5 15:55:11.977073 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 5 15:55:12.005128 systemd-networkd[1529]: cali30f2be9c25c: Gained IPv6LL Nov 5 15:55:12.140985 sshd[5185]: Connection closed by 10.0.0.1 port 60704 Nov 5 15:55:12.141795 sshd-session[5181]: pam_unix(sshd:session): session closed for user core Nov 5 15:55:12.147376 systemd[1]: sshd@10-10.0.0.94:22-10.0.0.1:60704.service: Deactivated successfully. Nov 5 15:55:12.149695 systemd[1]: session-11.scope: Deactivated successfully. Nov 5 15:55:12.150765 systemd-logind[1620]: Session 11 logged out. Waiting for processes to exit. Nov 5 15:55:12.152624 systemd-logind[1620]: Removed session 11. 
Nov 5 15:55:12.412517 kubelet[2813]: E1105 15:55:12.412327 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-mswwg" podUID="48c2a4a5-482d-4600-8d80-4c89933cceaa" Nov 5 15:55:12.412517 kubelet[2813]: E1105 15:55:12.412343 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d76b985b9-z9rht" podUID="e369c643-3d7c-424a-939d-fd5462f1f671" Nov 5 15:55:16.551104 kubelet[2813]: E1105 15:55:16.551032 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:55:17.158227 systemd[1]: Started sshd@11-10.0.0.94:22-10.0.0.1:60712.service - OpenSSH per-connection server daemon (10.0.0.1:60712). Nov 5 15:55:17.228320 sshd[5207]: Accepted publickey for core from 10.0.0.1 port 60712 ssh2: RSA SHA256:jxfBzj8t4gNsP6XgB3HCYMs94mi46GFjdNA2wywm1q8 Nov 5 15:55:17.230297 sshd-session[5207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:55:17.235308 systemd-logind[1620]: New session 12 of user core. Nov 5 15:55:17.246103 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 5 15:55:17.479110 sshd[5210]: Connection closed by 10.0.0.1 port 60712 Nov 5 15:55:17.481473 sshd-session[5207]: pam_unix(sshd:session): session closed for user core Nov 5 15:55:17.486962 systemd[1]: sshd@11-10.0.0.94:22-10.0.0.1:60712.service: Deactivated successfully. Nov 5 15:55:17.489098 systemd[1]: session-12.scope: Deactivated successfully. Nov 5 15:55:17.489966 systemd-logind[1620]: Session 12 logged out. Waiting for processes to exit. Nov 5 15:55:17.491392 systemd-logind[1620]: Removed session 12. 
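
The recurring dns.go:154 warnings above fire because Linux resolvers honor at most three nameserver entries, so kubelet trims the pod's resolv.conf and logs the applied line ("1.1.1.1 1.0.0.1 8.8.8.8") while omitting the rest. A standalone sketch of the same check, assuming the conventional glibc MAXNS limit of 3 (the constant name here is illustrative, not kubelet's):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // glibc MAXNS; the limit behind kubelet's warning

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Collect every "nameserver <addr>" entry.
	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		// Same idea as the log line: only the first three are applied.
		fmt.Printf("nameserver limit exceeded: applying %v, omitting %v\n",
			servers[:maxNameservers], servers[maxNameservers:])
	}
}
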
Nov 5 15:55:17.550986 containerd[1641]: time="2025-11-05T15:55:17.550898935Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 15:55:18.020621 containerd[1641]: time="2025-11-05T15:55:18.020538890Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:55:18.156207 containerd[1641]: time="2025-11-05T15:55:18.156101678Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 15:55:18.156413 containerd[1641]: time="2025-11-05T15:55:18.156144328Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 15:55:18.156580 kubelet[2813]: E1105 15:55:18.156518 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:55:18.156580 kubelet[2813]: E1105 15:55:18.156573 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:55:18.157136 kubelet[2813]: E1105 15:55:18.156676 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-77b5df4b9c-nv52j_calico-system(4d556d16-c6b3-4ab7-996f-c53ed792f703): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 15:55:18.157686 containerd[1641]: time="2025-11-05T15:55:18.157636311Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 15:55:18.547180 containerd[1641]: time="2025-11-05T15:55:18.547114629Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:55:18.598693 containerd[1641]: time="2025-11-05T15:55:18.598513806Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 15:55:18.598693 containerd[1641]: time="2025-11-05T15:55:18.598597615Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 15:55:18.599339 kubelet[2813]: E1105 15:55:18.599057 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 
15:55:18.599339 kubelet[2813]: E1105 15:55:18.599115 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:55:18.599339 kubelet[2813]: E1105 15:55:18.599220 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-77b5df4b9c-nv52j_calico-system(4d556d16-c6b3-4ab7-996f-c53ed792f703): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 15:55:18.599493 kubelet[2813]: E1105 15:55:18.599280 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77b5df4b9c-nv52j" podUID="4d556d16-c6b3-4ab7-996f-c53ed792f703" Nov 5 15:55:21.553474 containerd[1641]: time="2025-11-05T15:55:21.553387875Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 15:55:21.898146 containerd[1641]: time="2025-11-05T15:55:21.898090179Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:55:21.899694 containerd[1641]: time="2025-11-05T15:55:21.899634098Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 15:55:21.899775 containerd[1641]: time="2025-11-05T15:55:21.899710604Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 15:55:21.900016 kubelet[2813]: E1105 15:55:21.899946 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:55:21.900016 kubelet[2813]: E1105 15:55:21.900013 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:55:21.900495 kubelet[2813]: E1105 15:55:21.900103 2813 kuberuntime_manager.go:1449] "Unhandled Error" 
err="container calico-csi start failed in pod csi-node-driver-gf82q_calico-system(5cbe1702-972a-4f84-9d2f-51b96b54edda): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 15:55:21.906559 containerd[1641]: time="2025-11-05T15:55:21.906507075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 15:55:22.260051 containerd[1641]: time="2025-11-05T15:55:22.259846481Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:55:22.261889 containerd[1641]: time="2025-11-05T15:55:22.261772923Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 15:55:22.261981 containerd[1641]: time="2025-11-05T15:55:22.261862923Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 15:55:22.262245 kubelet[2813]: E1105 15:55:22.262163 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:55:22.262245 kubelet[2813]: E1105 15:55:22.262238 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:55:22.262381 kubelet[2813]: E1105 15:55:22.262353 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-gf82q_calico-system(5cbe1702-972a-4f84-9d2f-51b96b54edda): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 15:55:22.262492 kubelet[2813]: E1105 15:55:22.262420 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gf82q" podUID="5cbe1702-972a-4f84-9d2f-51b96b54edda" Nov 5 15:55:22.496975 systemd[1]: Started sshd@12-10.0.0.94:22-10.0.0.1:40806.service - OpenSSH per-connection server daemon (10.0.0.1:40806). Nov 5 15:55:22.555305 containerd[1641]: time="2025-11-05T15:55:22.555003469Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 15:55:22.572483 sshd[5234]: Accepted publickey for core from 10.0.0.1 port 40806 ssh2: RSA SHA256:jxfBzj8t4gNsP6XgB3HCYMs94mi46GFjdNA2wywm1q8 Nov 5 15:55:22.574585 sshd-session[5234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:55:22.580126 systemd-logind[1620]: New session 13 of user core. Nov 5 15:55:22.593154 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 5 15:55:22.735789 sshd[5237]: Connection closed by 10.0.0.1 port 40806 Nov 5 15:55:22.735443 sshd-session[5234]: pam_unix(sshd:session): session closed for user core Nov 5 15:55:22.748100 systemd[1]: sshd@12-10.0.0.94:22-10.0.0.1:40806.service: Deactivated successfully. Nov 5 15:55:22.750942 systemd[1]: session-13.scope: Deactivated successfully. Nov 5 15:55:22.752685 systemd-logind[1620]: Session 13 logged out. Waiting for processes to exit. Nov 5 15:55:22.756493 systemd[1]: Started sshd@13-10.0.0.94:22-10.0.0.1:40818.service - OpenSSH per-connection server daemon (10.0.0.1:40818). Nov 5 15:55:22.757296 systemd-logind[1620]: Removed session 13. Nov 5 15:55:22.825094 sshd[5251]: Accepted publickey for core from 10.0.0.1 port 40818 ssh2: RSA SHA256:jxfBzj8t4gNsP6XgB3HCYMs94mi46GFjdNA2wywm1q8 Nov 5 15:55:22.827882 sshd-session[5251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:55:22.835278 systemd-logind[1620]: New session 14 of user core. Nov 5 15:55:22.845092 systemd[1]: Started session-14.scope - Session 14 of User core. 
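
The sshd@...service unit names above come from socket-activated, per-connection sshd: systemd instantiates a template unit whose instance string appears to encode a connection counter plus the local and remote address:port pairs. A small sketch that decodes that naming, assuming the "<n>-<local>-<remote>" layout visible in the log:

package main

import (
	"fmt"
	"strings"
)

// parseInstance splits a per-connection unit name such as
// "sshd@12-10.0.0.94:22-10.0.0.1:40806.service" into its three fields.
func parseInstance(unit string) (seq, local, remote string, ok bool) {
	name := strings.TrimSuffix(strings.TrimPrefix(unit, "sshd@"), ".service")
	parts := strings.SplitN(name, "-", 3) // counter, local addr:port, remote addr:port
	if len(parts) != 3 {
		return "", "", "", false
	}
	return parts[0], parts[1], parts[2], true
}

func main() {
	seq, local, remote, ok := parseInstance("sshd@12-10.0.0.94:22-10.0.0.1:40806.service")
	fmt.Println(seq, local, remote, ok) // 12 10.0.0.94:22 10.0.0.1:40806 true
}
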
Nov 5 15:55:22.903391 containerd[1641]: time="2025-11-05T15:55:22.903283214Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:55:23.033034 containerd[1641]: time="2025-11-05T15:55:23.032960824Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 15:55:23.033222 containerd[1641]: time="2025-11-05T15:55:23.032959381Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 15:55:23.033347 kubelet[2813]: E1105 15:55:23.033278 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:55:23.033761 kubelet[2813]: E1105 15:55:23.033345 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:55:23.033761 kubelet[2813]: E1105 15:55:23.033467 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-757d4c4c4d-gc5kt_calico-system(b05fd954-e904-4df9-a183-93526853dbb1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 15:55:23.033761 kubelet[2813]: E1105 15:55:23.033510 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-757d4c4c4d-gc5kt" podUID="b05fd954-e904-4df9-a183-93526853dbb1" Nov 5 15:55:23.525266 sshd[5254]: Connection closed by 10.0.0.1 port 40818 Nov 5 15:55:23.525647 sshd-session[5251]: pam_unix(sshd:session): session closed for user core Nov 5 15:55:23.538769 systemd[1]: sshd@13-10.0.0.94:22-10.0.0.1:40818.service: Deactivated successfully. Nov 5 15:55:23.541451 systemd[1]: session-14.scope: Deactivated successfully. Nov 5 15:55:23.542393 systemd-logind[1620]: Session 14 logged out. Waiting for processes to exit. Nov 5 15:55:23.546406 systemd[1]: Started sshd@14-10.0.0.94:22-10.0.0.1:40832.service - OpenSSH per-connection server daemon (10.0.0.1:40832). Nov 5 15:55:23.547146 systemd-logind[1620]: Removed session 14. 
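
Every accepted login above reports the same "RSA SHA256:jxfBzj8t..." value; that is OpenSSH's key fingerprint format, the unpadded base64 of a SHA-256 digest over the wire-format public-key blob. A sketch of the derivation (the sample input is a dummy, since the actual key material is not in the log):

package main

import (
	"crypto/sha256"
	"encoding/base64"
	"fmt"
)

// fingerprint mirrors OpenSSH's SHA256 fingerprint: unpadded base64 of the
// SHA-256 digest of the raw public key blob.
func fingerprint(keyBlob []byte) string {
	sum := sha256.Sum256(keyBlob)
	return "SHA256:" + base64.RawStdEncoding.EncodeToString(sum[:])
}

func main() {
	// The real blob would be the base64-decoded key field of an
	// authorized_keys line; a placeholder stands in here.
	fmt.Println(fingerprint([]byte("dummy-rsa-key-blob")))
}
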
Nov 5 15:55:23.552539 containerd[1641]: time="2025-11-05T15:55:23.552266460Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 15:55:23.598609 sshd[5265]: Accepted publickey for core from 10.0.0.1 port 40832 ssh2: RSA SHA256:jxfBzj8t4gNsP6XgB3HCYMs94mi46GFjdNA2wywm1q8 Nov 5 15:55:23.600398 sshd-session[5265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:55:23.605865 systemd-logind[1620]: New session 15 of user core. Nov 5 15:55:23.613066 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 5 15:55:23.955396 sshd[5268]: Connection closed by 10.0.0.1 port 40832 Nov 5 15:55:23.955720 sshd-session[5265]: pam_unix(sshd:session): session closed for user core Nov 5 15:55:23.960771 systemd[1]: sshd@14-10.0.0.94:22-10.0.0.1:40832.service: Deactivated successfully. Nov 5 15:55:23.963145 systemd[1]: session-15.scope: Deactivated successfully. Nov 5 15:55:23.964162 systemd-logind[1620]: Session 15 logged out. Waiting for processes to exit. Nov 5 15:55:23.965850 systemd-logind[1620]: Removed session 15. Nov 5 15:55:24.025868 containerd[1641]: time="2025-11-05T15:55:24.025776045Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:55:24.068477 containerd[1641]: time="2025-11-05T15:55:24.068355908Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 15:55:24.068477 containerd[1641]: time="2025-11-05T15:55:24.068422173Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 15:55:24.068857 kubelet[2813]: E1105 15:55:24.068792 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:55:24.069312 kubelet[2813]: E1105 15:55:24.068863 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:55:24.069312 kubelet[2813]: E1105 15:55:24.068991 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-mswwg_calico-system(48c2a4a5-482d-4600-8d80-4c89933cceaa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 15:55:24.069312 kubelet[2813]: E1105 15:55:24.069068 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-mswwg" podUID="48c2a4a5-482d-4600-8d80-4c89933cceaa" Nov 5 15:55:24.554471 kubelet[2813]: E1105 15:55:24.554425 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:55:24.555865 containerd[1641]: time="2025-11-05T15:55:24.555555075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:55:25.089743 containerd[1641]: time="2025-11-05T15:55:25.089642407Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:55:25.218791 containerd[1641]: time="2025-11-05T15:55:25.218652487Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:55:25.218791 containerd[1641]: time="2025-11-05T15:55:25.218735393Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:55:25.219216 kubelet[2813]: E1105 15:55:25.219145 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:55:25.219633 kubelet[2813]: E1105 15:55:25.219219 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:55:25.219633 kubelet[2813]: E1105 15:55:25.219320 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-d76b985b9-kbchr_calico-apiserver(bc1f133a-26eb-43d7-9fdb-a3e47afd9653): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:55:25.219633 kubelet[2813]: E1105 15:55:25.219394 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d76b985b9-kbchr" podUID="bc1f133a-26eb-43d7-9fdb-a3e47afd9653" Nov 5 15:55:26.560118 containerd[1641]: time="2025-11-05T15:55:26.559386658Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:55:26.941074 containerd[1641]: time="2025-11-05T15:55:26.941014996Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:55:26.959090 containerd[1641]: time="2025-11-05T15:55:26.958980325Z" level=error 
msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:55:26.959277 containerd[1641]: time="2025-11-05T15:55:26.959150196Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:55:26.959457 kubelet[2813]: E1105 15:55:26.959379 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:55:26.960116 kubelet[2813]: E1105 15:55:26.959468 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:55:26.960116 kubelet[2813]: E1105 15:55:26.959604 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-d76b985b9-z9rht_calico-apiserver(e369c643-3d7c-424a-939d-fd5462f1f671): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:55:26.960116 kubelet[2813]: E1105 15:55:26.959662 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d76b985b9-z9rht" podUID="e369c643-3d7c-424a-939d-fd5462f1f671" Nov 5 15:55:27.490391 containerd[1641]: time="2025-11-05T15:55:27.490310260Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cc76f6a2ed069f539fd74d0187edac2ce3bc4ef77bfb7cc9ebe54463270af23d\" id:\"73697109df429eebd6105cdc3b07e1ccc6d54d530a6fb0196013fc3b0db92b4a\" pid:5292 exited_at:{seconds:1762358127 nanos:489776231}" Nov 5 15:55:27.495696 kubelet[2813]: E1105 15:55:27.495635 2813 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:55:28.969215 systemd[1]: Started sshd@15-10.0.0.94:22-10.0.0.1:40834.service - OpenSSH per-connection server daemon (10.0.0.1:40834). Nov 5 15:55:29.027712 sshd[5310]: Accepted publickey for core from 10.0.0.1 port 40834 ssh2: RSA SHA256:jxfBzj8t4gNsP6XgB3HCYMs94mi46GFjdNA2wywm1q8 Nov 5 15:55:29.029870 sshd-session[5310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:55:29.034441 systemd-logind[1620]: New session 16 of user core. Nov 5 15:55:29.041183 systemd[1]: Started session-16.scope - Session 16 of User core. 
Nov 5 15:55:29.212305 sshd[5313]: Connection closed by 10.0.0.1 port 40834 Nov 5 15:55:29.212642 sshd-session[5310]: pam_unix(sshd:session): session closed for user core Nov 5 15:55:29.217681 systemd[1]: sshd@15-10.0.0.94:22-10.0.0.1:40834.service: Deactivated successfully. Nov 5 15:55:29.220354 systemd[1]: session-16.scope: Deactivated successfully. Nov 5 15:55:29.221737 systemd-logind[1620]: Session 16 logged out. Waiting for processes to exit. Nov 5 15:55:29.223435 systemd-logind[1620]: Removed session 16. Nov 5 15:55:30.552372 kubelet[2813]: E1105 15:55:30.552273 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77b5df4b9c-nv52j" podUID="4d556d16-c6b3-4ab7-996f-c53ed792f703" Nov 5 15:55:34.231274 systemd[1]: Started sshd@16-10.0.0.94:22-10.0.0.1:39936.service - OpenSSH per-connection server daemon (10.0.0.1:39936). Nov 5 15:55:34.288644 sshd[5330]: Accepted publickey for core from 10.0.0.1 port 39936 ssh2: RSA SHA256:jxfBzj8t4gNsP6XgB3HCYMs94mi46GFjdNA2wywm1q8 Nov 5 15:55:34.290644 sshd-session[5330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:55:34.296842 systemd-logind[1620]: New session 17 of user core. Nov 5 15:55:34.310273 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 5 15:55:34.560570 sshd[5333]: Connection closed by 10.0.0.1 port 39936 Nov 5 15:55:34.560856 sshd-session[5330]: pam_unix(sshd:session): session closed for user core Nov 5 15:55:34.566005 systemd[1]: sshd@16-10.0.0.94:22-10.0.0.1:39936.service: Deactivated successfully. Nov 5 15:55:34.568412 systemd[1]: session-17.scope: Deactivated successfully. Nov 5 15:55:34.569435 systemd-logind[1620]: Session 17 logged out. Waiting for processes to exit. Nov 5 15:55:34.570753 systemd-logind[1620]: Removed session 17. 
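
The alternation between ErrImagePull and "Back-off pulling image" above is kubelet's image-pull back-off: each failed pull roughly doubles the wait before the next attempt, up to a ceiling, which is why fresh PullImage attempts appear minutes apart later in this log. A sketch of that schedule, using the commonly cited 10-second base and 5-minute cap as assumptions rather than values read from this system:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed defaults: 10s base delay, doubling, capped at 5 minutes.
	delay := 10 * time.Second
	const maxDelay = 5 * time.Minute
	for attempt := 1; attempt <= 7; attempt++ {
		fmt.Printf("pull attempt %d failed (ErrImagePull); ImagePullBackOff for %s\n",
			attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
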
Nov 5 15:55:36.551602 kubelet[2813]: E1105 15:55:36.551535 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d76b985b9-kbchr" podUID="bc1f133a-26eb-43d7-9fdb-a3e47afd9653" Nov 5 15:55:36.552376 kubelet[2813]: E1105 15:55:36.552306 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gf82q" podUID="5cbe1702-972a-4f84-9d2f-51b96b54edda" Nov 5 15:55:37.551875 kubelet[2813]: E1105 15:55:37.551470 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-757d4c4c4d-gc5kt" podUID="b05fd954-e904-4df9-a183-93526853dbb1" Nov 5 15:55:38.552886 kubelet[2813]: E1105 15:55:38.552821 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d76b985b9-z9rht" podUID="e369c643-3d7c-424a-939d-fd5462f1f671" Nov 5 15:55:39.551364 kubelet[2813]: E1105 15:55:39.551251 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-7c778bb748-mswwg" podUID="48c2a4a5-482d-4600-8d80-4c89933cceaa" Nov 5 15:55:39.580606 systemd[1]: Started sshd@17-10.0.0.94:22-10.0.0.1:39940.service - OpenSSH per-connection server daemon (10.0.0.1:39940). Nov 5 15:55:39.633562 sshd[5348]: Accepted publickey for core from 10.0.0.1 port 39940 ssh2: RSA SHA256:jxfBzj8t4gNsP6XgB3HCYMs94mi46GFjdNA2wywm1q8 Nov 5 15:55:39.635395 sshd-session[5348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:55:39.640285 systemd-logind[1620]: New session 18 of user core. Nov 5 15:55:39.652238 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 5 15:55:39.784015 sshd[5351]: Connection closed by 10.0.0.1 port 39940 Nov 5 15:55:39.784367 sshd-session[5348]: pam_unix(sshd:session): session closed for user core Nov 5 15:55:39.789120 systemd[1]: sshd@17-10.0.0.94:22-10.0.0.1:39940.service: Deactivated successfully. Nov 5 15:55:39.791528 systemd[1]: session-18.scope: Deactivated successfully. Nov 5 15:55:39.792480 systemd-logind[1620]: Session 18 logged out. Waiting for processes to exit. Nov 5 15:55:39.793844 systemd-logind[1620]: Removed session 18. Nov 5 15:55:42.552694 containerd[1641]: time="2025-11-05T15:55:42.552442740Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 15:55:42.891478 containerd[1641]: time="2025-11-05T15:55:42.891397868Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:55:42.894762 containerd[1641]: time="2025-11-05T15:55:42.894672797Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 15:55:42.894885 containerd[1641]: time="2025-11-05T15:55:42.894825365Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 15:55:42.897321 kubelet[2813]: E1105 15:55:42.897261 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:55:42.897736 kubelet[2813]: E1105 15:55:42.897321 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:55:42.897736 kubelet[2813]: E1105 15:55:42.897423 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-77b5df4b9c-nv52j_calico-system(4d556d16-c6b3-4ab7-996f-c53ed792f703): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 15:55:42.898835 containerd[1641]: time="2025-11-05T15:55:42.898279823Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 15:55:43.251714 containerd[1641]: 
time="2025-11-05T15:55:43.251540338Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:55:43.318844 containerd[1641]: time="2025-11-05T15:55:43.318732464Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 15:55:43.318844 containerd[1641]: time="2025-11-05T15:55:43.318783331Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 15:55:43.319205 kubelet[2813]: E1105 15:55:43.319135 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:55:43.319278 kubelet[2813]: E1105 15:55:43.319203 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:55:43.319397 kubelet[2813]: E1105 15:55:43.319306 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-77b5df4b9c-nv52j_calico-system(4d556d16-c6b3-4ab7-996f-c53ed792f703): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 15:55:43.319397 kubelet[2813]: E1105 15:55:43.319356 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77b5df4b9c-nv52j" podUID="4d556d16-c6b3-4ab7-996f-c53ed792f703" Nov 5 15:55:44.799401 systemd[1]: Started sshd@18-10.0.0.94:22-10.0.0.1:48928.service - OpenSSH per-connection server daemon (10.0.0.1:48928). Nov 5 15:55:44.855776 sshd[5372]: Accepted publickey for core from 10.0.0.1 port 48928 ssh2: RSA SHA256:jxfBzj8t4gNsP6XgB3HCYMs94mi46GFjdNA2wywm1q8 Nov 5 15:55:44.857753 sshd-session[5372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:55:44.865733 systemd-logind[1620]: New session 19 of user core. 
Nov 5 15:55:44.871519 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 5 15:55:45.001369 sshd[5375]: Connection closed by 10.0.0.1 port 48928 Nov 5 15:55:45.001795 sshd-session[5372]: pam_unix(sshd:session): session closed for user core Nov 5 15:55:45.006777 systemd[1]: sshd@18-10.0.0.94:22-10.0.0.1:48928.service: Deactivated successfully. Nov 5 15:55:45.009129 systemd[1]: session-19.scope: Deactivated successfully. Nov 5 15:55:45.009935 systemd-logind[1620]: Session 19 logged out. Waiting for processes to exit. Nov 5 15:55:45.011796 systemd-logind[1620]: Removed session 19. Nov 5 15:55:49.551834 containerd[1641]: time="2025-11-05T15:55:49.551747504Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 15:55:49.938708 containerd[1641]: time="2025-11-05T15:55:49.938631792Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:55:50.019676 systemd[1]: Started sshd@19-10.0.0.94:22-10.0.0.1:37788.service - OpenSSH per-connection server daemon (10.0.0.1:37788). Nov 5 15:55:50.082892 sshd[5391]: Accepted publickey for core from 10.0.0.1 port 37788 ssh2: RSA SHA256:jxfBzj8t4gNsP6XgB3HCYMs94mi46GFjdNA2wywm1q8 Nov 5 15:55:50.083779 containerd[1641]: time="2025-11-05T15:55:50.083644491Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 15:55:50.084019 containerd[1641]: time="2025-11-05T15:55:50.083948394Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 15:55:50.084395 kubelet[2813]: E1105 15:55:50.084310 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:55:50.085019 kubelet[2813]: E1105 15:55:50.084402 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:55:50.085019 kubelet[2813]: E1105 15:55:50.084636 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-757d4c4c4d-gc5kt_calico-system(b05fd954-e904-4df9-a183-93526853dbb1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 15:55:50.085019 kubelet[2813]: E1105 15:55:50.084705 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-757d4c4c4d-gc5kt" podUID="b05fd954-e904-4df9-a183-93526853dbb1" Nov 5 15:55:50.085599 sshd-session[5391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:55:50.086765 containerd[1641]: time="2025-11-05T15:55:50.086723277Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:55:50.092079 systemd-logind[1620]: New session 20 of user core. Nov 5 15:55:50.100087 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 5 15:55:50.234213 sshd[5394]: Connection closed by 10.0.0.1 port 37788 Nov 5 15:55:50.234480 sshd-session[5391]: pam_unix(sshd:session): session closed for user core Nov 5 15:55:50.246758 systemd[1]: sshd@19-10.0.0.94:22-10.0.0.1:37788.service: Deactivated successfully. Nov 5 15:55:50.248959 systemd[1]: session-20.scope: Deactivated successfully. Nov 5 15:55:50.249759 systemd-logind[1620]: Session 20 logged out. Waiting for processes to exit. Nov 5 15:55:50.252974 systemd[1]: Started sshd@20-10.0.0.94:22-10.0.0.1:37804.service - OpenSSH per-connection server daemon (10.0.0.1:37804). Nov 5 15:55:50.253613 systemd-logind[1620]: Removed session 20. Nov 5 15:55:50.323136 sshd[5407]: Accepted publickey for core from 10.0.0.1 port 37804 ssh2: RSA SHA256:jxfBzj8t4gNsP6XgB3HCYMs94mi46GFjdNA2wywm1q8 Nov 5 15:55:50.324836 sshd-session[5407]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:55:50.329548 systemd-logind[1620]: New session 21 of user core. Nov 5 15:55:50.338065 systemd[1]: Started session-21.scope - Session 21 of User core. 
Nov 5 15:55:50.551230 containerd[1641]: time="2025-11-05T15:55:50.550971845Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:55:50.612253 containerd[1641]: time="2025-11-05T15:55:50.612193365Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:55:50.612695 containerd[1641]: time="2025-11-05T15:55:50.612232809Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:55:50.612746 kubelet[2813]: E1105 15:55:50.612449 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:55:50.612746 kubelet[2813]: E1105 15:55:50.612500 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:55:50.612746 kubelet[2813]: E1105 15:55:50.612704 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-d76b985b9-z9rht_calico-apiserver(e369c643-3d7c-424a-939d-fd5462f1f671): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:55:50.612843 kubelet[2813]: E1105 15:55:50.612763 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d76b985b9-z9rht" podUID="e369c643-3d7c-424a-939d-fd5462f1f671" Nov 5 15:55:50.612905 containerd[1641]: time="2025-11-05T15:55:50.612856085Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 15:55:50.985145 containerd[1641]: time="2025-11-05T15:55:50.985073593Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:55:51.068981 containerd[1641]: time="2025-11-05T15:55:51.068861020Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 15:55:51.068981 containerd[1641]: time="2025-11-05T15:55:51.068901987Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 15:55:51.069310 kubelet[2813]: E1105 15:55:51.069253 2813 
log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:55:51.069310 kubelet[2813]: E1105 15:55:51.069310 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:55:51.069536 kubelet[2813]: E1105 15:55:51.069486 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-gf82q_calico-system(5cbe1702-972a-4f84-9d2f-51b96b54edda): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 15:55:51.069756 containerd[1641]: time="2025-11-05T15:55:51.069725430Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:55:51.519754 containerd[1641]: time="2025-11-05T15:55:51.519682248Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:55:51.551005 containerd[1641]: time="2025-11-05T15:55:51.550914132Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:55:51.551174 containerd[1641]: time="2025-11-05T15:55:51.551061971Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:55:51.551199 kubelet[2813]: E1105 15:55:51.551166 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:55:51.551551 kubelet[2813]: E1105 15:55:51.551215 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:55:51.551551 kubelet[2813]: E1105 15:55:51.551400 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-d76b985b9-kbchr_calico-apiserver(bc1f133a-26eb-43d7-9fdb-a3e47afd9653): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:55:51.551551 kubelet[2813]: E1105 15:55:51.551442 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d76b985b9-kbchr" podUID="bc1f133a-26eb-43d7-9fdb-a3e47afd9653" Nov 5 15:55:51.551678 containerd[1641]: time="2025-11-05T15:55:51.551484908Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 15:55:51.955299 sshd[5410]: Connection closed by 10.0.0.1 port 37804 Nov 5 15:55:51.955599 sshd-session[5407]: pam_unix(sshd:session): session closed for user core Nov 5 15:55:51.965020 containerd[1641]: time="2025-11-05T15:55:51.964966216Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:55:51.969060 systemd[1]: sshd@20-10.0.0.94:22-10.0.0.1:37804.service: Deactivated successfully. Nov 5 15:55:51.971733 systemd[1]: session-21.scope: Deactivated successfully. Nov 5 15:55:51.972641 systemd-logind[1620]: Session 21 logged out. Waiting for processes to exit. Nov 5 15:55:51.976769 systemd[1]: Started sshd@21-10.0.0.94:22-10.0.0.1:37818.service - OpenSSH per-connection server daemon (10.0.0.1:37818). Nov 5 15:55:51.977163 containerd[1641]: time="2025-11-05T15:55:51.977018243Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 15:55:51.977163 containerd[1641]: time="2025-11-05T15:55:51.977117660Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 15:55:51.977529 systemd-logind[1620]: Removed session 21. 
Nov 5 15:55:51.977668 kubelet[2813]: E1105 15:55:51.977495 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 5 15:55:51.977668 kubelet[2813]: E1105 15:55:51.977562 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 5 15:55:51.977873 kubelet[2813]: E1105 15:55:51.977664 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-gf82q_calico-system(5cbe1702-972a-4f84-9d2f-51b96b54edda): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Nov 5 15:55:51.978077 kubelet[2813]: E1105 15:55:51.977895 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gf82q" podUID="5cbe1702-972a-4f84-9d2f-51b96b54edda"
Nov 5 15:55:52.034528 sshd[5424]: Accepted publickey for core from 10.0.0.1 port 37818 ssh2: RSA SHA256:jxfBzj8t4gNsP6XgB3HCYMs94mi46GFjdNA2wywm1q8
Nov 5 15:55:52.036453 sshd-session[5424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:55:52.041085 systemd-logind[1620]: New session 22 of user core.
Nov 5 15:55:52.051061 systemd[1]: Started session-22.scope - Session 22 of User core.
Nov 5 15:55:52.304343 update_engine[1628]: I20251105 15:55:52.304188 1628 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Nov 5 15:55:52.304343 update_engine[1628]: I20251105 15:55:52.304257 1628 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Nov 5 15:55:52.306328 update_engine[1628]: I20251105 15:55:52.306279 1628 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Nov 5 15:55:52.307037 update_engine[1628]: I20251105 15:55:52.307008 1628 omaha_request_params.cc:62] Current group set to alpha
Nov 5 15:55:52.307168 update_engine[1628]: I20251105 15:55:52.307148 1628 update_attempter.cc:499] Already updated boot flags. Skipping.
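Every pull failure above has the same shape: containerd records "fetch failed after status: 404 Not Found" from ghcr.io, turns that into NotFound for the reference, and kubelet then reports it four times as the error bubbles up (log.go, kuberuntime_image.go, kuberuntime_manager.go, pod_workers.go). The root cause is simply that the ghcr.io/flatcar/calico/*:v3.30.4 tags do not exist in the registry. Below is a minimal Go sketch of the equivalent manifest probe, assuming GHCR's anonymous token endpoint and the standard OCI distribution API; it is not containerd's resolver code, just the same HTTP check done by hand.

// checktag.go: probe whether an image tag exists on ghcr.io.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

func main() {
	// One of the tags reported NotFound in the log above.
	repo, tag := "flatcar/calico/csi", "v3.30.4"

	// Anonymous pull token (assumption: the standard ghcr.io token
	// endpoint, which serves public repositories without credentials).
	tokenURL := fmt.Sprintf("https://ghcr.io/token?scope=repository:%s:pull", repo)
	resp, err := http.Get(tokenURL)
	if err != nil {
		fmt.Fprintln(os.Stderr, "token request:", err)
		os.Exit(1)
	}
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		fmt.Fprintln(os.Stderr, "token decode:", err)
		os.Exit(1)
	}
	resp.Body.Close()

	// HEAD the manifest; a 404 here is what containerd surfaces as
	// "failed to resolve reference ... not found".
	req, err := http.NewRequest(http.MethodHead,
		fmt.Sprintf("https://ghcr.io/v2/%s/manifests/%s", repo, tag), nil)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	res.Body.Close()
	fmt.Printf("ghcr.io/%s:%s -> HTTP %d\n", repo, tag, res.StatusCode)
}

Run against any of the tags in the log and it should print HTTP 404, matching the containerd fetch errors; a tag that actually exists in the repository would return 200.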
Nov 5 15:55:52.307168 update_engine[1628]: I20251105 15:55:52.307160 1628 update_attempter.cc:643] Scheduling an action processor start.
Nov 5 15:55:52.307213 update_engine[1628]: I20251105 15:55:52.307180 1628 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Nov 5 15:55:52.307237 update_engine[1628]: I20251105 15:55:52.307221 1628 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Nov 5 15:55:52.307308 update_engine[1628]: I20251105 15:55:52.307286 1628 omaha_request_action.cc:271] Posting an Omaha request to disabled
Nov 5 15:55:52.307339 update_engine[1628]: I20251105 15:55:52.307302 1628 omaha_request_action.cc:272] Request:
Nov 5 15:55:52.307339 update_engine[1628]:
Nov 5 15:55:52.307339 update_engine[1628]:
Nov 5 15:55:52.307339 update_engine[1628]:
Nov 5 15:55:52.307339 update_engine[1628]:
Nov 5 15:55:52.307339 update_engine[1628]:
Nov 5 15:55:52.307339 update_engine[1628]:
Nov 5 15:55:52.307339 update_engine[1628]:
Nov 5 15:55:52.307339 update_engine[1628]:
Nov 5 15:55:52.307339 update_engine[1628]: I20251105 15:55:52.307313 1628 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Nov 5 15:55:52.321590 update_engine[1628]: I20251105 15:55:52.321519 1628 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Nov 5 15:55:52.322276 update_engine[1628]: I20251105 15:55:52.322216 1628 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Nov 5 15:55:52.326983 locksmithd[1678]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Nov 5 15:55:52.333820 update_engine[1628]: E20251105 15:55:52.333768 1628 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Nov 5 15:55:52.333902 update_engine[1628]: I20251105 15:55:52.333877 1628 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Nov 5 15:55:52.552246 containerd[1641]: time="2025-11-05T15:55:52.552196549Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Nov 5 15:55:52.913187 sshd[5427]: Connection closed by 10.0.0.1 port 37818
Nov 5 15:55:52.914650 sshd-session[5424]: pam_unix(sshd:session): session closed for user core
Nov 5 15:55:52.930309 systemd[1]: sshd@21-10.0.0.94:22-10.0.0.1:37818.service: Deactivated successfully.
Nov 5 15:55:52.933198 systemd[1]: session-22.scope: Deactivated successfully.
Nov 5 15:55:52.934805 systemd-logind[1620]: Session 22 logged out. Waiting for processes to exit.
Nov 5 15:55:52.937795 systemd[1]: Started sshd@22-10.0.0.94:22-10.0.0.1:37834.service - OpenSSH per-connection server daemon (10.0.0.1:37834).
Nov 5 15:55:52.939375 systemd-logind[1620]: Removed session 22.
Nov 5 15:55:53.017169 sshd[5460]: Accepted publickey for core from 10.0.0.1 port 37834 ssh2: RSA SHA256:jxfBzj8t4gNsP6XgB3HCYMs94mi46GFjdNA2wywm1q8
Nov 5 15:55:53.018747 sshd-session[5460]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:55:53.023953 systemd-logind[1620]: New session 23 of user core.
Nov 5 15:55:53.032095 systemd[1]: Started session-23.scope - Session 23 of User core.
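The update_engine errors here are separate from the registry problem and look intentional rather than broken: the Omaha request is being posted to the literal URL "disabled", which matches the conventional way of switching off update checks on Flatcar (SERVER=disabled in /etc/flatcar/update.conf), so libcurl tries to resolve "disabled" as a hostname and fails with a DNS error by design, then retries on its normal schedule. A tiny sketch of the same lookup, assuming no local host entry or search domain happens to make "disabled" resolvable:

package main

import (
	"fmt"
	"net"
)

func main() {
	// update_engine hands the configured server string straight to its
	// HTTP fetcher; resolving "disabled" fails just as the log shows.
	if _, err := net.LookupHost("disabled"); err != nil {
		fmt.Println("as in the log:", err) // e.g. "lookup disabled: no such host"
	}
}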
Nov 5 15:55:53.050127 containerd[1641]: time="2025-11-05T15:55:53.050064025Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 5 15:55:53.064023 containerd[1641]: time="2025-11-05T15:55:53.063844648Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Nov 5 15:55:53.064023 containerd[1641]: time="2025-11-05T15:55:53.063879964Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Nov 5 15:55:53.064237 kubelet[2813]: E1105 15:55:53.064188 2813 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 5 15:55:53.064674 kubelet[2813]: E1105 15:55:53.064244 2813 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Nov 5 15:55:53.064674 kubelet[2813]: E1105 15:55:53.064337 2813 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-mswwg_calico-system(48c2a4a5-482d-4600-8d80-4c89933cceaa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Nov 5 15:55:53.064674 kubelet[2813]: E1105 15:55:53.064379 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-mswwg" podUID="48c2a4a5-482d-4600-8d80-4c89933cceaa"
Nov 5 15:55:53.279781 sshd[5463]: Connection closed by 10.0.0.1 port 37834
Nov 5 15:55:53.280135 sshd-session[5460]: pam_unix(sshd:session): session closed for user core
Nov 5 15:55:53.290408 systemd[1]: sshd@22-10.0.0.94:22-10.0.0.1:37834.service: Deactivated successfully.
Nov 5 15:55:53.293646 systemd[1]: session-23.scope: Deactivated successfully.
Nov 5 15:55:53.295150 systemd-logind[1620]: Session 23 logged out. Waiting for processes to exit.
Nov 5 15:55:53.298597 systemd[1]: Started sshd@23-10.0.0.94:22-10.0.0.1:37844.service - OpenSSH per-connection server daemon (10.0.0.1:37844).
Nov 5 15:55:53.299622 systemd-logind[1620]: Removed session 23.
Nov 5 15:55:53.372941 sshd[5475]: Accepted publickey for core from 10.0.0.1 port 37844 ssh2: RSA SHA256:jxfBzj8t4gNsP6XgB3HCYMs94mi46GFjdNA2wywm1q8
Nov 5 15:55:53.374636 sshd-session[5475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:55:53.379937 systemd-logind[1620]: New session 24 of user core.
Nov 5 15:55:53.390293 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 5 15:55:53.522021 sshd[5478]: Connection closed by 10.0.0.1 port 37844
Nov 5 15:55:53.522452 sshd-session[5475]: pam_unix(sshd:session): session closed for user core
Nov 5 15:55:53.529443 systemd[1]: sshd@23-10.0.0.94:22-10.0.0.1:37844.service: Deactivated successfully.
Nov 5 15:55:53.532621 systemd[1]: session-24.scope: Deactivated successfully.
Nov 5 15:55:53.534722 systemd-logind[1620]: Session 24 logged out. Waiting for processes to exit.
Nov 5 15:55:53.536457 systemd-logind[1620]: Removed session 24.
Nov 5 15:55:57.459767 containerd[1641]: time="2025-11-05T15:55:57.459697660Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cc76f6a2ed069f539fd74d0187edac2ce3bc4ef77bfb7cc9ebe54463270af23d\" id:\"740671aee2f5a24e5f1acb71782d73bfeaee6aeb608bb8cc687e05bcce5fbc03\" pid:5503 exited_at:{seconds:1762358157 nanos:459293307}"
Nov 5 15:55:57.552165 kubelet[2813]: E1105 15:55:57.552026 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-77b5df4b9c-nv52j" podUID="4d556d16-c6b3-4ab7-996f-c53ed792f703"
Nov 5 15:55:58.535967 systemd[1]: Started sshd@24-10.0.0.94:22-10.0.0.1:37854.service - OpenSSH per-connection server daemon (10.0.0.1:37854).
Nov 5 15:55:58.600570 sshd[5517]: Accepted publickey for core from 10.0.0.1 port 37854 ssh2: RSA SHA256:jxfBzj8t4gNsP6XgB3HCYMs94mi46GFjdNA2wywm1q8
Nov 5 15:55:58.602885 sshd-session[5517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:55:58.608294 systemd-logind[1620]: New session 25 of user core.
Nov 5 15:55:58.619153 systemd[1]: Started session-25.scope - Session 25 of User core.
Nov 5 15:55:58.739674 sshd[5520]: Connection closed by 10.0.0.1 port 37854
Nov 5 15:55:58.740196 sshd-session[5517]: pam_unix(sshd:session): session closed for user core
Nov 5 15:55:58.746117 systemd[1]: sshd@24-10.0.0.94:22-10.0.0.1:37854.service: Deactivated successfully.
Nov 5 15:55:58.748529 systemd[1]: session-25.scope: Deactivated successfully.
Nov 5 15:55:58.749848 systemd-logind[1620]: Session 25 logged out. Waiting for processes to exit.
Nov 5 15:55:58.751524 systemd-logind[1620]: Removed session 25.
Nov 5 15:56:00.551251 kubelet[2813]: E1105 15:56:00.551181 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-757d4c4c4d-gc5kt" podUID="b05fd954-e904-4df9-a183-93526853dbb1"
Nov 5 15:56:02.266088 update_engine[1628]: I20251105 15:56:02.265985 1628 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Nov 5 15:56:02.266088 update_engine[1628]: I20251105 15:56:02.266093 1628 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Nov 5 15:56:02.266688 update_engine[1628]: I20251105 15:56:02.266664 1628 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Nov 5 15:56:02.275142 update_engine[1628]: E20251105 15:56:02.275039 1628 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found)
Nov 5 15:56:02.275142 update_engine[1628]: I20251105 15:56:02.275141 1628 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Nov 5 15:56:03.551477 kubelet[2813]: E1105 15:56:03.551405 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d76b985b9-z9rht" podUID="e369c643-3d7c-424a-939d-fd5462f1f671"
Nov 5 15:56:03.758839 systemd[1]: Started sshd@25-10.0.0.94:22-10.0.0.1:55054.service - OpenSSH per-connection server daemon (10.0.0.1:55054).
Nov 5 15:56:03.846254 sshd[5535]: Accepted publickey for core from 10.0.0.1 port 55054 ssh2: RSA SHA256:jxfBzj8t4gNsP6XgB3HCYMs94mi46GFjdNA2wywm1q8
Nov 5 15:56:03.848389 sshd-session[5535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:56:03.855068 systemd-logind[1620]: New session 26 of user core.
Nov 5 15:56:03.863275 systemd[1]: Started session-26.scope - Session 26 of User core.
Nov 5 15:56:04.025156 sshd[5538]: Connection closed by 10.0.0.1 port 55054
Nov 5 15:56:04.025583 sshd-session[5535]: pam_unix(sshd:session): session closed for user core
Nov 5 15:56:04.030876 systemd[1]: sshd@25-10.0.0.94:22-10.0.0.1:55054.service: Deactivated successfully.
Nov 5 15:56:04.033421 systemd[1]: session-26.scope: Deactivated successfully.
Nov 5 15:56:04.034605 systemd-logind[1620]: Session 26 logged out. Waiting for processes to exit.
Nov 5 15:56:04.036827 systemd-logind[1620]: Removed session 26.
Nov 5 15:56:04.552863 kubelet[2813]: E1105 15:56:04.552802 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d76b985b9-kbchr" podUID="bc1f133a-26eb-43d7-9fdb-a3e47afd9653"
Nov 5 15:56:05.552513 kubelet[2813]: E1105 15:56:05.552432 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-gf82q" podUID="5cbe1702-972a-4f84-9d2f-51b96b54edda"
Nov 5 15:56:06.551532 kubelet[2813]: E1105 15:56:06.551415 2813 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-mswwg" podUID="48c2a4a5-482d-4600-8d80-4c89933cceaa"
Nov 5 15:56:09.045876 systemd[1]: Started sshd@26-10.0.0.94:22-10.0.0.1:55056.service - OpenSSH per-connection server daemon (10.0.0.1:55056).
Nov 5 15:56:09.100015 sshd[5553]: Accepted publickey for core from 10.0.0.1 port 55056 ssh2: RSA SHA256:jxfBzj8t4gNsP6XgB3HCYMs94mi46GFjdNA2wywm1q8
Nov 5 15:56:09.102497 sshd-session[5553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 5 15:56:09.115209 systemd-logind[1620]: New session 27 of user core.
Nov 5 15:56:09.120304 systemd[1]: Started session-27.scope - Session 27 of User core.
Nov 5 15:56:09.250345 sshd[5557]: Connection closed by 10.0.0.1 port 55056
Nov 5 15:56:09.250701 sshd-session[5553]: pam_unix(sshd:session): session closed for user core
Nov 5 15:56:09.256382 systemd[1]: sshd@26-10.0.0.94:22-10.0.0.1:55056.service: Deactivated successfully.
Nov 5 15:56:09.258798 systemd[1]: session-27.scope: Deactivated successfully.
Nov 5 15:56:09.259765 systemd-logind[1620]: Session 27 logged out. Waiting for processes to exit.
Nov 5 15:56:09.261433 systemd-logind[1620]: Removed session 27.
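By this point every Calico image has moved from ErrImagePull to ImagePullBackOff: kubelet has stopped pulling eagerly and only retries each image on an exponential backoff, which is why the same "Back-off pulling image" messages recur at 15:56:00, 15:56:03, 15:56:04, 15:56:05 and 15:56:06 and will keep recurring at widening intervals until the tags exist. A sketch of that schedule, assuming kubelet's default image-pull backoff parameters (10s base, doubling per failure, capped at 5m); the parameters are an assumption, not read from this node's configuration:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed defaults: 10s initial delay, doubling per failure, 5m cap.
	delay := 10 * time.Second
	const maxDelay = 5 * time.Minute
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("pull attempt %d: next retry in %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}

Under those assumptions the retry gap grows 10s, 20s, 40s, 80s, 160s and then pins at 300s, which matches the general pattern of progressively sparser backoff messages in the remainder of the log.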